CN113868171A - Interconnection system - Google Patents

Interconnection system

Info

Publication number
CN113868171A
Authority
CN
China
Prior art keywords
package
module
interconnect
interconnected
interface
Prior art date
Legal status
Pending
Application number
CN202111142579.1A
Other languages
Chinese (zh)
Inventor
王惟林
康潇亮
张学敏
陈晨
石阳
Current Assignee
Shanghai Zhaoxin Semiconductor Co Ltd
Original Assignee
VIA Alliance Semiconductor Co Ltd
Priority date
Filing date
Publication date
Application filed by VIA Alliance Semiconductor Co Ltd
Priority to CN202111142579.1A
Priority to US17/506,124 (US11853250B2)
Priority to US17/506,144 (US11675729B2)
Priority to US17/511,800 (US11526460B1)
Priority to US17/523,049 (US12001375B2)
Publication of CN113868171A

Classifications

    • G06F13/4068 — Interconnection of, or transfer of information between, memories, input/output devices or central processing units; bus structure; device-to-bus coupling; electrical coupling
    • G06F11/1004 — Error detection or correction by redundancy in data representation; adding special bits or symbols to protect a block of data words, e.g. CRC or checksum
    • G06F9/5011 — Allocation of resources, e.g. of the central processing unit [CPU], to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5016 — Allocation of resources to service a request, the resource being the memory
    • G06F9/505 — Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals, considering the load


Abstract

An interconnect system includes a plurality of packages and a first interconnect interface. Any two of the packages may access each other's hardware resources by transmitting a first packet and a second packet through the first interconnect interface. The first packet includes first interconnection information for establishing communication between the two packages. The second packet includes a first data payload loaded from one of the two packages. The packages include a first package and a second package, which are arranged to be connected to each other through the first interconnect interface.

Description

Interconnection system
Technical Field
The present invention relates to the field of Integrated Circuits (ICs), and more particularly to an interconnect system including a plurality of packages (sockets) and an interconnect interface.
Background
Conventional cross-chip transmission is implemented with a high-speed serial bus such as PCIe.
However, the bandwidth of the high-speed serial bus at the transmitting or receiving end usually does not match the bandwidth of the transmission channel, which causes data congestion and leaves the transmission bandwidth under-utilized.
In addition, beyond data transmission performance, careful circuit design and efficient use of the space between chips are also important concerns.
Therefore, there is a need for a cross-chip interconnect system with low latency, high bandwidth utilization, and high space utilization.
Disclosure of Invention
An embodiment of the invention provides a cross-chip interconnect system, which includes a plurality of packages (sockets) and a first interconnect interface. Any two of the packages may access each other's hardware resources by transmitting a first packet and a second packet through the first interconnect interface. The first packet includes first interconnection information for establishing communication between the two packages. The second packet includes a first data payload loaded from one of the two packages. The packages include a first package and a second package arranged to be connected to each other through the first interconnect interface.
In some embodiments, the first packet further includes a first header and a first check code. The first header is used to mark the attribute of the first interconnection information. The first check code is used to check the correctness of the first interconnection information.
In some embodiments, the number of bits occupied by the first interconnection information is fixed.
In some embodiments, the second packet further includes a second header and a second check code. The second header is used to mark the attribute of the first data payload. The second check code is used to check the correctness of the first data payload.
In some embodiments, the first data payload occupies more bits when the first interconnect interface is congested than when it is not congested.
In some embodiments, each package includes a plurality of dies and a second interconnect interface. Any two of the dies may access each other's hardware resources by transmitting a third packet through the second interconnect interface. The dies include a first die and a second die, which are configured to be connected to each other through the second interconnect interface. The third packet includes second interconnection information and a second data payload. The second interconnection information is used to establish communication between the two dies. The second data payload is loaded from one of the two dies.
In some embodiments, the third packet includes a third header and a third check code. The third header is used to mark the attributes of the second data payload and the second interconnection information. The third check code is used to check the correctness of the second data payload and the second interconnection information.
In some embodiments, the number of bits occupied by the second data payload and the number of bits occupied by the second interconnection information are fixed.
In some embodiments, the hardware resources include a Last Level Cache (LLC). The first and second packets are used to maintain cache coherency between the last level caches of any two packages.
In some embodiments, the packages further comprise a third package, the first package and the third package being arranged to be interconnected through the first interconnect interface, and the second package and the third package being arranged to be interconnected through the first interconnect interface.
In some embodiments, the packages further comprise a third package and a fourth package, the first package and the third package being arranged to be interconnected through the first interconnect interface, and the second package and the fourth package being arranged to be interconnected through the first interconnect interface.
In some embodiments, the first package, the second package, the third package and the fourth package are all located on a first plane.
In some embodiments, the first package and the fourth package are further configured to be interconnected through the first interconnect interface, and the second package and the third package are further configured to be interconnected through the first interconnect interface.
In some embodiments, the packages further include a fifth package and a sixth package. The first package and the fifth package are arranged to be interconnected through the first interconnect interface, the fourth package and the fifth package are arranged to be interconnected through the first interconnect interface, the fifth package and the sixth package are arranged to be interconnected through the first interconnect interface, the second package and the sixth package are arranged to be interconnected through the first interconnect interface, and the third package and the sixth package are arranged to be interconnected through the first interconnect interface. The fifth package is located on a second plane and the sixth package is located on a third plane. The first plane, the second plane and the third plane are parallel to each other, and the first plane is arranged between the second plane and the third plane.
In some embodiments, the packages further comprise a fifth package, a sixth package, a seventh package and an eighth package, wherein the fifth package and the sixth package are arranged to be interconnected through the first interconnect interface, the fifth package and the seventh package are arranged to be interconnected through the first interconnect interface, the sixth package and the eighth package are arranged to be interconnected through the first interconnect interface, the seventh package and the eighth package are arranged to be interconnected through the first interconnect interface, the first package and the fifth package are arranged to be interconnected through the first interconnect interface, the second package and the sixth package are arranged to be interconnected through the first interconnect interface, the third package and the seventh package are arranged to be interconnected through the first interconnect interface, and the fourth package and the eighth package are arranged to be interconnected through the first interconnect interface. The fifth package, the sixth package, the seventh package and the eighth package are all located on a second plane, and the second plane is parallel to the first plane.
The present application discloses a cross-chip interconnect system that achieves low latency, high bandwidth utilization and high space utilization through the interconnect interfaces between packages and between dies, and through the interconnect topology design among the packages.
Drawings
The present disclosure will be better understood from the following description of exemplary embodiments taken in conjunction with the accompanying drawings. Moreover, it should be understood that the order of execution of the blocks in the flowcharts of the present disclosure may be changed, and/or some blocks may be changed, eliminated, or combined.
FIG. 1 is a block diagram of an exemplary system according to an embodiment of the invention.
FIG. 2 is a schematic diagram of an exemplary system according to an embodiment of the present invention.
FIG. 3A is a schematic diagram illustrating an exemplary system topology according to an embodiment of the present invention.
FIG. 3B is a schematic diagram illustrating an exemplary system topology according to an embodiment of the invention.
FIG. 4 is a schematic diagram of an exemplary system according to an embodiment of the present invention.
FIG. 5 is a schematic diagram of an exemplary system according to an embodiment of the present invention.
FIG. 6 is a block diagram of an exemplary system according to an embodiment of the invention.
Fig. 7A is a block diagram of an exemplary system according to an embodiment of the invention.
FIG. 7B is a block diagram of an exemplary system according to an embodiment of the invention.
FIG. 7C is a block diagram of an exemplary system according to an embodiment of the invention.
Fig. 7D is a block diagram of an exemplary system according to an embodiment of the invention.
FIG. 7E is a block diagram of an exemplary system according to an embodiment of the invention.
FIG. 8 illustrates the ZPI/ZDI communication architecture according to an embodiment of the present invention.
Fig. 9A and 9B illustrate, in waveform diagrams, the input/output protocol between a device and ZPI/ZDI.
Fig. 10A and 10B illustrate the packet transmission path and hardware structure of ZPI between two packages according to an embodiment of the invention.
FIGS. 11A and 11B illustrate the packet transmission path and hardware structure of ZDI between two dies according to embodiments of the invention.
Fig. 12A is a diagram illustrating a format of a first packet according to an embodiment of the invention.
Fig. 12B is a diagram illustrating a format of a second packet according to an embodiment of the invention.
Fig. 13 is a diagram illustrating a format of a third packet according to an embodiment of the invention.
Detailed Description
The following description sets forth various embodiments of the invention, but is not intended to limit the invention. The actual scope of the invention is defined by the claims.
In each of the embodiments listed below, the same or similar elements or components will be denoted by the same reference numerals.
The invention discloses a cross-chip interconnect system, which comprises a plurality of packages and interconnect interfaces among the packages, wherein the packages are arranged to communicate with one another through a first interconnect interface. The following description will refer to the first interconnect interface as ZPI.
FIG. 1 is a block diagram of an exemplary system 10 according to an embodiment of the invention. As shown in fig. 1, the system 10 contains two packages, socket0 and socket1, and the ZPI between them. The packages socket0 and socket1 are interconnected by ZPI. In the example of fig. 1, there are two clusters in each package, labeled cluster0 and cluster1, respectively. In other cases, each package may contain one or more clusters. Each cluster contains several central processing unit (CPU) cores (not shown in fig. 1). Each package may also contain a last-level cache (LLC), an interconnect bus, and various other components (e.g., input/output controllers, clock modules, power modules, and so on), and may connect to dual in-line memory modules (DIMMs).
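As a rough illustration only, the package hierarchy of fig. 1 can be modeled with the following C sketch; all type names and array sizes here are assumptions for exposition, not taken from the patent.

```c
#include <stdint.h>

#define CLUSTERS_PER_SOCKET 2  /* as drawn in fig. 1; the patent allows one or more */
#define CORES_PER_CLUSTER   4  /* assumed; the patent does not fix a core count     */

struct cpu_core { uint32_t core_id; };

struct cluster  { struct cpu_core cores[CORES_PER_CLUSTER]; };

struct socket {
    uint32_t       socket_id;
    struct cluster clusters[CLUSTERS_PER_SOCKET];
    void          *llc;    /* last-level cache, shared over ZPI           */
    void          *dimm;   /* attached DIMM memory, also remotely visible */
};

/* System 10: two packages joined by one bidirectional ZPI link. */
struct system10 { struct socket socket0, socket1; };
```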
In system 10, packages socket0 and socket1 may communicate by transmitting packets having a specific format to each other through ZPI. Thus, the CPU cores in package socket0 can access the hardware resources of socket1 (such as its LLC, DIMMs, or other storage media). Likewise, the CPU cores in package socket1 may also access the hardware resources of socket0. In this way, the CPU cores and I/O resources of all clusters in system 10 can be managed and scheduled uniformly, and the hardware resources owned by socket0 and socket1 can be used as one pool. For example, any CPU core or I/O device in system 10 may access memory resources owned by packages socket0 and socket1. As another example, packages socket0 and socket1 may maintain cache coherency with each other by transmitting cache-coherence packets through ZPI.
FIG. 2 is a schematic diagram illustrating an exemplary system 20 according to an embodiment of the present invention. As shown in FIG. 2, system 20 includes package 201, package 202, and package 203. Package 201 and package 202 are interconnected via ZPI, package 202 and package 203 are interconnected via ZPI, and package 201 and package 203 are interconnected via ZPI to form a three-package ring topology.
As described above with respect to system 10 of fig. 1, any two of package 201, package 202, and package 203 shown in fig. 2 may access each other's hardware resources by transmitting packets through ZPI.
FIG. 3A is a schematic diagram illustrating an exemplary system 30A according to an embodiment of the present invention. As shown in FIG. 3A, system 30A includes package 301, package 302, package 303, and package 304, all of which are located on the same plane. Package 301 and package 302 are interconnected via ZPI, package 302 and package 304 are interconnected via ZPI, package 304 and package 303 are interconnected via ZPI, and package 303 and package 301 are interconnected via ZPI to form a four-package ring topology.
As described above with respect to the system 10 of fig. 1 or the system 20 of fig. 2, any two of the package 301, the package 302, the package 303, and the package 304 shown in fig. 3A may access each other's hardware resources by transmitting packets through ZPI. It should be noted that although package 301 and package 304 are not directly connected to each other through ZPI, they may still communicate with each other through an intermediate node (package 302 or package 303) by transmitting packets through ZPI. Likewise, although packages 302 and 303 are not directly connected to each other through ZPI, they may still communicate with each other through an intermediate node (package 301 or package 304) by transmitting packets through ZPI.
FIG. 3B is a schematic diagram illustrating an exemplary system 30B according to an embodiment of the present invention. In contrast to system 30A of fig. 3A, in system 30B, package 302 and package 303 are directly connected to each other via ZPI, and package 301 and package 304 are directly connected to each other via ZPI. Thus, communication between the packages 302 and 303, and between the packages 301 and 304, does not have to pass through intermediate nodes.
Fig. 4 is a schematic diagram illustrating an exemplary system 40 according to an embodiment of the present invention. As shown in fig. 4, system 40 includes a package 401, a package 402, a package 403, a package 404, a package 405, and a package 406, wherein package 401, package 402, package 403, and package 404 are located on the same plane, hereinafter referred to as the "first plane"; package 405 is located on a second plane and package 406 is located on a third plane. The first plane, the second plane and the third plane are parallel to each other, and the first plane is arranged between the second plane and the third plane. Package 401 and package 402 are interconnected by ZPI, package 402 and package 404 are interconnected by ZPI, package 404 and package 403 are interconnected by ZPI, and package 403 and package 401 are interconnected by ZPI to form a four-package ring topology in the first plane. Package 405 is interconnected with packages 401, 404, and 406, respectively, through ZPI, and package 406 is interconnected with packages 402, 403, and 405, respectively, through ZPI. Thus, the six packages shown in FIG. 4 form a three-layer three-dimensional topology. In other examples, the topology may be extended to more levels (planes) of interconnects based on the system 40 of FIG. 4, and the number of packages per level (plane) may also be extended to larger values.
As described above with respect to the system 10 of fig. 1, the system 20 of fig. 2, or the system 30A of fig. 3A, any two of the packages 401, 402, 403, 404, 405, and 406 shown in fig. 4 may access each other's hardware resources by transmitting packets through ZPI. Although packages 401 and 404 are not directly connected to each other through ZPI, they may still communicate with each other through an intermediate node (package 402 or package 403) by transmitting packets through ZPI. Likewise, packages 402 and 403 may communicate through an intermediate node (package 401 or package 404); packages 405 and 402 may communicate through an intermediate node (package 401 or package 404); packages 405 and 403 may communicate through an intermediate node (package 401 or package 404); packages 406 and 401 may communicate through an intermediate node (package 402 or package 403); and packages 406 and 404 may communicate through an intermediate node (package 402 or package 403).
Fig. 5 is a schematic diagram illustrating an exemplary system 50 according to an embodiment of the present invention. As shown in FIG. 5, system 50 contains package 501, package 502, package 503, package 504, package 505, package 506, package 507, and package 508, where package 501, package 502, package 503, and package 504 are located on the same plane, hereinafter "plane A", and package 505, package 506, package 507, and package 508 are located on another plane, hereinafter "plane B". Plane A and plane B are parallel to each other. Package 501 and package 502 are interconnected by ZPI, package 502 and package 504 are interconnected by ZPI, package 504 and package 503 are interconnected by ZPI, and package 503 and package 501 are interconnected by ZPI to form a four-package ring topology in plane A. Package 505 and package 506 are interconnected by ZPI, package 506 and package 508 are interconnected by ZPI, package 508 and package 507 are interconnected by ZPI, and package 507 and package 505 are interconnected by ZPI to form a four-package ring topology in plane B. Package 501, package 502, package 503, and package 504 on plane A are interconnected with package 505, package 506, package 507, and package 508 on plane B, respectively, through ZPI. Thus, the eight packages shown in fig. 5 form a two-layer three-dimensional topology. In other examples, the topology may be extended to more levels (planes) of interconnects based on the system 50 of FIG. 5, and the number of packages per level (plane) may be extended to larger values.
As described above with respect to system 10 of fig. 1, system 20 of fig. 2, system 30A of fig. 3A, or system 40 of fig. 4, any two of the packages 501 through 508 shown in fig. 5 may access each other's hardware resources by transmitting packets through ZPI. Although packages 501 and 504 are not directly connected to each other through ZPI, they may still communicate with each other through an intermediate node (package 502 or package 503) by transmitting packets through ZPI. Likewise, packages 502 and 503 may communicate through an intermediate node (package 501 or package 504); packages 505 and 508 may communicate through an intermediate node (package 506 or package 507); and packages 506 and 507 may communicate through an intermediate node (package 505 or package 508).
As mentioned in the paragraph above, for two packages that are on different planes and are not directly connected to each other through ZPI, the routing path for packet transmission may follow these rules: first cross the plane, then take the shortest path within the plane; and when there are multiple shortest-path candidates in the same plane, determine the next transmission node in the clockwise direction. For example, a packet transmitted from package 501 to package 508 through ZPI may take the routing path "package 501 → package 505 → package 506 → package 508", where the hop from package 501 to package 505 follows the rule "cross the plane first", and the hops from package 505 to package 506 to package 508 follow the rule "determine the next transmission node clockwise". As another example, a packet transmitted from package 506 to package 503 through ZPI may take the routing path "package 506 → package 502 → package 504 → package 503", where the hop from package 506 to package 502 follows the rule "cross the plane first", and the hops from package 502 to package 504 to package 503 follow the rule "determine the next transmission node clockwise". However, the routing path of a packet transmission may follow other rules, and the invention is not limited thereto. For example, when there are multiple shortest-path candidates on the same plane, the next transmission node may be determined in the counterclockwise direction.
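To make the rule concrete, here is a minimal C sketch of a next-hop chooser for the eight-package, two-plane topology of fig. 5. The numbering scheme (packages 501..508 mapped to ids 0..7, the ring order, and what counts as "clockwise") is an illustrative reading of the examples above, not a layout fixed by the patent.

```c
/* Ring order within each plane of fig. 5: 501-502-504-503 (ids 0-1-3-2). */
static const int idx_at_pos[4] = {0, 1, 3, 2};  /* ring position -> in-plane id */
static const int pos_of_idx[4] = {0, 1, 3, 2};  /* in-plane id   -> ring position */

/* Next hop for a packet at package `src` destined for package `dst`
 * (ids 0..7; 501..504 -> 0..3 on plane A, 505..508 -> 4..7 on plane B). */
int zpi_next_hop(int src, int dst)
{
    if (src == dst)
        return dst;                      /* already there */
    if (src / 4 != dst / 4)
        return (src + 4) % 8;            /* rule 1: cross the plane first */

    int plane = (src / 4) * 4;
    int spos  = pos_of_idx[src % 4];
    int dpos  = pos_of_idx[dst % 4];
    int diff  = (dpos - spos + 4) % 4;   /* hops needed going clockwise */

    /* rule 2: shortest ring path; diff == 2 is the tie, resolved clockwise */
    int step = (diff == 3) ? 3 : 1;      /* +3 mod 4 == one counter-clockwise hop */
    return plane + idx_at_pos[(spos + step) % 4];
}
```

With this labeling, repeated calls trace 0 → 4 → 5 → 7 for the 501-to-508 example and 5 → 1 → 3 → 2 for the 506-to-503 example, matching the two routing paths quoted above.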
According to embodiments of the present invention, each package in the cross-chip interconnect system may include a plurality of dies and a second interconnect interface, the dies being arranged to communicate with one another through the second interconnect interface. The following description will refer to the second interconnect interface as ZDI.
According to other embodiments of the present invention, the topologies of the interconnect systems of fig. 2-5 may also be used for inter-die and inter-chiplet interconnects; that is, each package in fig. 2-5 may be replaced with a die or a chiplet. The connection and transmission scheme of the inter-die interconnect, ZDI, is described in detail below. At its core is heterogeneous integration, i.e., different components are designed on separate dies. In one embodiment of the invention, the chiplets may be manufactured with different process technologies.
Fig. 6 is a block diagram of an exemplary system 60 according to an embodiment of the invention. As shown in fig. 6, the package 600 in the system 60 contains two dies, Die0 and Die1, and the ZDI between them. Die0 and Die1 are interconnected by ZDI. In other cases, there may be a greater number of dies in a package. In the example of FIG. 6, there are two clusters in each die, labeled cluster0 and cluster1, respectively. In other cases, one or more clusters may be present in each die. Each cluster contains several central processing unit (CPU) cores (not shown in fig. 6). In addition, there may be a last-level cache (LLC), an interconnect bus, and various other components (such as input/output controllers, clock modules, power modules, and so on) in each die.
In system 60, Die0 and Die1 may communicate by transmitting packets having a specific format to each other through ZDI. Thus, the CPU cores in Die0 can access the hardware resources of Die1. Similarly, the CPU cores in Die1 may also access the hardware resources of Die0.
The ZPI described above can be used together with ZDIs to allow dies in different packages to communicate with each other.
Fig. 7A is a block diagram illustrating an exemplary system 70A according to an embodiment of the invention. System 70A is based on system 20 of FIG. 2; each package contains dies Die0 and Die1, which are interconnected by ZDI. In this way, the hardware resources of the six dies in FIG. 7A can be shared. In other examples, the number of dies within each package may be greater.
FIG. 7B is a block diagram illustrating an exemplary system 70B according to an embodiment of the invention. System 70B is based on system 30A of FIG. 3A; each package contains dies Die0 and Die1, which are interconnected by ZDI. In this way, the hardware resources of the eight dies in FIG. 7B can be shared. In other examples, the number of dies within each package may be greater.
Fig. 7C is a block diagram of an exemplary system 70C according to an embodiment of the invention. System 70C is based on system 30B of FIG. 3B; each package contains dies Die0 and Die1, which are interconnected by ZDI. In this way, the hardware resources of the eight dies in fig. 7C can be shared. Compared to the system 70B of fig. 7B, the system 70C has more ZPIs directly connecting pairs of packages, resulting in shorter communication paths for the dies of different packages.
Fig. 7D is a block diagram of an exemplary system 70D according to an embodiment of the invention. System 70D is based on system 40 of FIG. 4; each package contains dies D0 and D1, which are interconnected by ZDI. In this way, the hardware resources of the twelve dies in FIG. 7D can be shared. In other examples, the topology may be extended to more levels (planes) of interconnects based on the system 70D of FIG. 7D, the number of packages per level (plane) may be extended to larger values, and the number of dies within each package may be greater.
Fig. 7E is a block diagram illustrating an exemplary system 70E according to an embodiment of the invention. System 70E is based on system 50 of FIG. 5; each package contains dies D0 and D1, which are interconnected by ZDI. In this way, the hardware resources of the sixteen dies in fig. 7E can be shared. In other examples, the topology may be extended to more levels (planes) of interconnects based on the system 70E of FIG. 7E, the number of packages per level (plane) may be extended to larger values, and the number of dies within each package may be greater.
FIG. 8 illustrates the ZPI/ZDI communication architecture according to an embodiment of the present invention. As shown in fig. 8, the interconnect interface 800 provides a bidirectional transmission channel between the devices Device0 and Device1. The interconnect interface 800 is a full-duplex design that allows simultaneous bidirectional transmission. In one aspect, the devices Device0 and Device1 are two packages and the interconnect interface 800 is a ZPI. In another aspect, the devices Device0 and Device1 are two dies and the interconnect interface 800 is a ZDI.
The Device0 sends out the packet signal 802 and the frequency signal 804 through the transmitter TX0 of the interconnect interface 800, to be received by the receiver RX0 of the interconnect interface 800 at the Device1. Conversely, the Device1 may send the packet signal 806 and the frequency signal 808 through the transmitter TX1 of the interconnect interface 800, to be received by the receiver RX1 of the interconnect interface 800 at the Device0.
Fig. 9A and 9B illustrate, in waveform diagrams, the input/output protocol (I/O protocol) between a device and ZPI/ZDI. In fig. 9A and 9B, the data signal TX_ENTRY of the source device is handed to the transmitter TX of the interconnect interface, carried over the transmission line (PHY) of the interconnect interface to the receiver RX at the other end, received as the data signal RX_ENTRY, and then delivered to the target device.
Fig. 9A shows the handshake communication between the source device and the transmitter TX of ZPI/ZDI, by which the transmitter TX obtains transmission content (the TX transmission sequence) from the source device.
When the signal READY/ACTIVE is pulled up, it indicates that the ZPI/ZDI link is established. At clock cycle (CLK) T0, the source device pulls up the TX_REQ signal and the transmitter TX pulls up the TX_ACK signal in response, so the data signal TX_ENTRY is transferred from the source device to the transmitter TX. At cycle T1, the signals TX_REQ and TX_ACK are pulled down, and no data is fetched from the source device for the time being. At cycles T2-T3, the source device pulls up the TX_REQ signal, but the transmitter TX has not pulled up the TX_ACK signal, indicating that the source device has a data signal ready but the transmitter TX has not yet fetched it. At cycle T4, the signals TX_REQ and TX_ACK are both pulled up and the data signal TX_ENTRY from the source device is delivered to the transmitter TX; the transmitter TX succeeds in fetching the data signal from the source device. At cycle T5, the TX_REQ and TX_ACK conditions are the same as at cycle T1, which marks the end of this data transfer to ZPI/ZDI. At cycles T6-T7, the transmitter TX is ready to fetch data from the source device (TX_ACK pulled up), but the source device has none to offer (TX_REQ pulled down).
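The transfer condition can be stated compactly: an entry moves only in a cycle where TX_REQ and TX_ACK are both high. A minimal C sketch of one clock edge, with names mirroring the waveform (the function itself is illustrative, not patent text):

```c
#include <stdbool.h>
#include <stdint.h>

/* One clock edge of the TX-side handshake of fig. 9A.  An entry moves
 * from the source device into the transmitter only in a cycle where
 * both TX_REQ and TX_ACK are asserted (cycles T0 and T4 above). */
bool tx_handshake_cycle(bool tx_req, bool tx_ack,
                        uint64_t tx_entry, uint64_t *latched)
{
    if (tx_req && tx_ack) {
        *latched = tx_entry;   /* TX_ENTRY captured by the transmitter */
        return true;
    }
    return false;              /* T1-T3, T5-T7: no transfer this cycle */
}
```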
Fig. 9B shows the handshake communication between the receiver RX of ZPI/ZDI and the target device, by which the transmission content received by the receiver RX (the RX transmission sequence) is delivered to the target device.
When the signal READY/ACTIVE is pulled up, it indicates that the ZPI/ZDI link is established. At clock cycle T0, the receiver RX of ZPI/ZDI pulls up the signal RX_REQ and the target device pulls up the signal RX_ACK in response, so the data signal RX_ENTRY passes from the receiver RX to the target device. At cycle T1, signals RX_REQ and RX_ACK are pulled down, and no data is fetched from the receiver RX for the time being. At cycle T2, the receiver RX pulls up the signal RX_REQ, but the target device has not pulled up the signal RX_ACK, indicating that the receiver RX has a data signal ready but the target device has not yet accepted it. At cycle T3, signals RX_REQ and RX_ACK are both pulled up, and the data signal RX_ENTRY taken by the receiver RX from the ZPI/ZDI transmission line is delivered to the target device; the target device successfully acquires the data signal from the receiver RX. At cycle T4, the RX_REQ and RX_ACK conditions are the same as at cycle T1, which marks the end of this data transfer to the target device. At cycles T5-T6, the target device is ready to take data from the receiver RX (RX_ACK pulled up), but the receiver RX has none to offer (RX_REQ pulled down). At cycle T7, the RX_REQ and RX_ACK signals are pulled up again to deliver another data signal RX_ENTRY to the target device. However, the target device may have a mechanism to refuse incoming data (for example, in view of its buffer status, or for other reasons). At cycles T8-T9, the target device pulls up the signal RX_BNT to request that the transmitted data be blocked, and the receiver RX responds with an acknowledgement, indicating that the blocking request is accepted.
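The RX side adds one twist over the TX side: the blocking request. A sketch under the same assumptions, treating an asserted RX_BNT as suppressing the transfer for that cycle:

```c
#include <stdbool.h>
#include <stdint.h>

/* RX-side counterpart (fig. 9B), with the blocking request added:
 * an asserted RX_BNT (target refusing data, as at cycles T8-T9)
 * suppresses the transfer even if RX_REQ/RX_ACK overlap. */
bool rx_handshake_cycle(bool rx_req, bool rx_ack, bool rx_bnt,
                        uint64_t rx_entry, uint64_t *delivered)
{
    if (rx_bnt)
        return false;          /* blocking request takes priority */
    if (rx_req && rx_ack) {
        *delivered = rx_entry; /* RX_ENTRY handed to the target device */
        return true;
    }
    return false;
}
```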
Fig. 10A and 10B illustrate the packet transmission path and hardware structure of ZPI between the packages socket0 and socket1 according to an embodiment of the present invention. As shown, the package socket0 is connected to the transmitter TX of ZPI, so the data signal of the package socket0 is transmitted via the transmitter TX to the electrical physical layer EPHY of ZPI, received by the receiver RX of ZPI, and transferred to the package socket1 connected at the other end of ZPI. The full-duplex reverse path (package socket1 to socket0) is designed in the same way. The clock of ZPI is generated by a phase-locked loop PLL and a clock generator CLKgen.
Fig. 10A particularly details the transmitter TX. The package socket0 delivers multiple types of data, via N different channels CH1-CHN, to the arbiter TXARB in the transmitter TX for arbitration. The data that wins arbitration is compressed by the data compressor DataComp and then forwarded to the packet generator PacketGen for packetization. When the package socket0 has no data to transmit, ZPI may generate a null packet by itself using the dummy packet generator FlitGen, filled with dummy content. The packets are then converted by the parallel-to-serial converter PtoS and transmitted via the EPHY layer to the receiver RX at the other end of ZPI, where they are delivered to one of the channels CH1-CHN in the package socket1 according to the data type.
ZPI may be implemented with a pipelined design. In the transmitter TX, while the packet generator PacketGen packetizes a first piece of data, the data compressor DataComp is compressing a second piece, and the arbiter TXARB is arbitrating a third.
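A toy C model of this three-stage transmit pipeline may help; the stage bodies are trivial stand-ins and an empty grant becomes a FlitGen-style null flit, but the per-cycle overlap matches the description above.

```c
#include <stdio.h>

#define NULL_FLIT (-1)   /* stand-in for a FlitGen dummy packet */

typedef struct {
    int arb;   /* entry granted by TXARB     (stage 1) */
    int comp;  /* entry squeezed by DataComp (stage 2) */
    int pkt;   /* flit built by PacketGen    (stage 3), ready for PtoS/EPHY */
} tx_pipe;

/* Grant the next pending entry, or NULL_FLIT if the channels are empty. */
static int txarb_pick(const int *queue, int *head, int len)
{
    return (*head < len) ? queue[(*head)++] : NULL_FLIT;
}

/* One clock cycle: all three stages advance on different entries. */
static void tx_pipe_cycle(tx_pipe *p, const int *queue, int *head, int len)
{
    p->pkt  = p->comp;                       /* PacketGen wraps entry N       */
    p->comp = p->arb;                        /* DataComp compresses entry N+1 */
    p->arb  = txarb_pick(queue, head, len);  /* TXARB grants entry N+2        */
}

int main(void)
{
    int queue[] = {10, 11, 12};  /* three entries waiting on channels CH1-CHN */
    int head = 0;
    tx_pipe p = {NULL_FLIT, NULL_FLIT, NULL_FLIT};
    for (int cycle = 0; cycle < 5; cycle++) {
        tx_pipe_cycle(&p, queue, &head, 3);
        printf("cycle %d: arb=%3d comp=%3d pkt=%3d\n", cycle, p.arb, p.comp, p.pkt);
    }
    return 0;
}
```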
The transmitter TX may buffer the transmitted data in a buffer RetryBuf. If the receiver RX finds a data error coming off the PHY, the retransmission mechanism is activated: the retransmission controller RetryCon takes the data whose transmission failed out of the buffer RetryBuf, and the packet generator PacketGen repackages it into a packet for retransmission. In one implementation, the RX side of the package socket1 issues a retry request through the TX side of the package socket1 (the TX side of socket1 is not shown in fig. 10A), which likewise has a retry controller RetryCon. Referring to fig. 8, TX0 and RX1 in the figure belong to socket0 while TX1 and RX0 belong to socket1; after RX0 fails its data check, socket1 may issue a retransmission request, which is transmitted from TX1 of socket1 via the EPHY to RX1 of socket0, and TX0 of socket0 then drives the retransmission controller RetryCon to fetch the previously transmitted contents from the buffer RetryBuf for retransmission.
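A compact sketch of the RetryBuf/RetryCon idea in C: every transmitted flit is parked in a ring of slots until it is known to have arrived, and a retry request fetches it back by sequence number. The slot count, flit size, and sequence-number discipline are all assumptions for illustration.

```c
#include <stdint.h>
#include <string.h>

#define RETRY_SLOTS 16   /* assumed depth     */
#define FLIT_BYTES  64   /* assumed flit size */

struct retry_buf {
    uint8_t  slot[RETRY_SLOTS][FLIT_BYTES];
    uint32_t seq[RETRY_SLOTS];   /* which sequence number each slot holds */
};

/* Park a copy of every flit as it is transmitted. */
void retry_store(struct retry_buf *rb, uint32_t seq, const uint8_t *flit)
{
    memcpy(rb->slot[seq % RETRY_SLOTS], flit, FLIT_BYTES);
    rb->seq[seq % RETRY_SLOTS] = seq;
}

/* RetryCon: on a retry request, fetch the failed flit so PacketGen can
 * repackage and resend it; returns NULL if the slot was overwritten. */
const uint8_t *retry_fetch(const struct retry_buf *rb, uint32_t seq)
{
    return (rb->seq[seq % RETRY_SLOTS] == seq) ? rb->slot[seq % RETRY_SLOTS] : NULL;
}
```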
A state machine LTSSM is also provided in ZPI for controlling the transmission rate of ZPI. In one embodiment, the state machine LTSSM can switch to a reduced-speed state and pause the arbiter TXARB and the data compressor DataComp so that no more data is provided to the packet generator PacketGen. In addition, the state machine LTSSM can further control the transmission rate of the EPHY at the EPHY layer, achieving the ZPI speed reduction. The package socket0 can switch the state machine LTSSM to the reduced-speed state in response to a low-power requirement, lowering the ZPI transmission rate. In another embodiment, the receiver RX may be congested (e.g., a register in the receiver RX is full), and the package socket1 may send a speed-reduction request to the package socket0 (via the ZPI transmission hardware in the other direction, not shown in the figure) so that the package socket0 requests the state machine LTSSM to lower the ZPI transmission rate. Referring to FIG. 8, when the register in the receiver RX is full, the package socket1 issues a speed-reduction request through the transmitter TX1, the EPHY layer and the receiver RX1 to the package socket0, and the package socket0 switches the state machine LTSSM to the reduced-speed state. In one embodiment, the PtoS converter includes a buffer to handle the speed reduction; for example, if the LTSSM reduces the EPHY rate, the buffer holds data that cannot yet be transferred to the other side.
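Read as a transition function, the rate control described above might look like the C fragment below; the two-state reduction and the trigger names are illustrative assumptions (a real LTSSM has more states).

```c
/* Two-state reading of the LTSSM rate control: the link leaves full
 * speed when the local socket requests low power or the remote RX
 * requests a slow-down, and returns on an (assumed) resume request. */
enum ltssm_state { LTSSM_FULL_SPEED, LTSSM_REDUCED };

enum ltssm_state ltssm_next(enum ltssm_state state,
                            int low_power_req,    /* from the local socket   */
                            int remote_slow_req,  /* from the remote RX side */
                            int resume_req)
{
    if (low_power_req || remote_slow_req)
        return LTSSM_REDUCED;       /* pause TXARB/DataComp, lower EPHY rate */
    if (resume_req)
        return LTSSM_FULL_SPEED;
    return state;                   /* otherwise hold the current state */
}
```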
Fig. 10B particularly details the receiver RX. A packet received from the electrical physical layer EPHY is converted by the serial-to-parallel converter StoP, sent to the decoder FlitDec for unpacking, and then checked by the check logic. The check logic may be based on a cyclic redundancy check (CRC) code and forward-error-correction (FEC) channel coding. If the check fails, the receiver RX skips the received data and triggers the retransmission mechanism. If the check passes, the received data is rearranged by the data-rearrangement module DataRea and then distributed by the analysis module RXanls to the corresponding channels CH1-CHN in the package socket1, completing the ZPI transmission from the package socket0 to the package socket1. The hardware of the receiver RX may also be pipelined: while the analysis module RXanls analyzes a first piece of data, the data-rearrangement module DataRea rearranges a second, already checked piece, and the decoder FlitDec unpacks a third.
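The patent names the CRC check but fixes no polynomial, so as a stand-in here is a textbook bitwise CRC-32 (reflected polynomial 0xEDB88320), used the way the check logic above is described: a mismatch means the flit is skipped and a retry is raised.

```c
#include <stddef.h>
#include <stdint.h>

/* Textbook bitwise CRC-32 over a decoded flit; the actual ZPI
 * polynomial is not disclosed, so this one is an assumption. */
uint32_t crc32_flit(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;
}

/* Check logic: 0 means the flit is skipped and a retry is triggered. */
int rx_check(const uint8_t *flit, size_t len, uint32_t expected)
{
    return crc32_flit(flit, len) == expected;
}
```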
FIGS. 11A and 11B illustrate the packet transmission path and hardware structure of ZDI between Die0 and Die1 according to embodiments of the invention. Compared to the ZPI transmitter TX of fig. 10A, the ZDI transmitter TX of fig. 11A has no data compressor DataComp. Compared to the ZPI receiver RX of fig. 10B, the ZDI receiver RX of fig. 11B has no data-rearrangement module DataRea. The check/retransmission and state-machine slow-down mechanisms can be the same as in the designs of FIGS. 10A and 10B.
There are two types of packets transmitted between packages via ZPI; in the following description one is referred to as the first packet and the other as the second packet. The packet transmitted between dies through ZDI is referred to as the third packet. The formats of the first, second, and third packets are described below with reference to the drawings.
Fig. 12A is a diagram illustrating the format of a first packet 1200A according to an embodiment of the invention. As shown in FIG. 12A, the first packet 1200A contains a header 120, ZPI information 121, and a cyclic redundancy check (CRC) code 122.
In the first packet 1200A, the ZPI information 121 is used to establish communication between two packages in accordance with the communication protocol of the ZPI interconnect interface, such as the handshake communication shown in fig. 9A and 9B. The number of bits occupied by the ZPI information 121 may be a fixed value. The header 120 is used to indicate the attributes of the ZPI information 121. The CRC code 122 is used to verify the correctness of the ZPI information 121.
Fig. 12B is a diagram illustrating the format of a second packet 1200B according to an embodiment of the invention. As shown in fig. 12B, the second packet 1200B contains a header 123, a data payload 124, and a cyclic redundancy check code 125.
In the second packet 1200B, the data payload 124 carries the data transmitted from one package to another, such as the data requested by a CPU core in one package when accessing a hardware resource of the other package and the data fed back in response, or the data required to maintain cache coherency between packages. The header 123 is used to indicate the attributes of the data payload 124. The CRC code 125 is used to verify the correctness of the data payload 124. The number of bits occupied by the data payload 124 need not be a fixed value; it may vary with the congestion level of ZPI. Specifically, when ZPI is congested, the data payload 124 occupies a larger number of bits, which improves bandwidth utilization; when ZPI is relatively uncongested, the data payload 124 occupies a smaller number of bits, which reduces transmission latency. The congestion level is determined according to the amount of data in the TX buffer of the ZPI sender; that is, the size of the data packet is arbitrated according to the amount of data backlogged in the buffer space. The more data is backlogged (i.e., the heavier the congestion), the more heavily each packet is loaded.
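A minimal sketch of that arbitration in C, assuming hypothetical fill thresholds and payload sizes (the patent specifies neither, only the monotone rule: the fuller the buffer, the bigger the payload):

```c
#include <stddef.h>

/* Payload-size arbitration from TX-buffer occupancy.  The monotone
 * rule (fuller buffer -> bigger payload) is from the text above; the
 * thresholds and byte counts are invented for illustration. */
size_t pick_payload_bytes(size_t buffered, size_t capacity)
{
    size_t fill_pct = buffered * 100 / capacity;
    if (fill_pct >= 75) return 512;  /* heavy congestion: maximize bandwidth use */
    if (fill_pct >= 25) return 256;  /* moderate backlog                         */
    return 64;                       /* light traffic: keep latency low          */
}
```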
Fig. 13 is a diagram illustrating the format of a third packet 1300 according to an embodiment of the invention. As shown in fig. 13, the third packet 1300 contains a header 130, a data payload 131, ZDI information 132, and a cyclic redundancy check code 133.
In the third packet 1300, the data payload 131 carries the data transmitted from one die to another, such as the data requested by a CPU core in the former die when accessing the hardware resources of the other die and the data fed back in response. The ZDI information 132 is used to establish communication between the two dies in accordance with the communication protocol of the ZDI interconnect interface, such as the handshake communication shown in fig. 9A and 9B. The number of bits occupied by the data payload 131 and the number of bits occupied by the ZDI information 132 may be fixed. The header 130 is used to indicate the attributes of the data payload 131 and the ZDI information 132. The CRC code 133 is used to verify the correctness of the data payload 131 and the ZDI information 132.
The header 123 in fig. 12B and the header 130 in fig. 13 may, for example, indicate the attribute of the transmitted data payload with 5 bits (the invention is not limited thereto). Examples of header values and the corresponding data-payload attributes are shown in Table I below.
<TABLE I>
[Table I appears only as an image (BDA0003284534950000141) in the original publication; its cell contents, which list example header encodings and the corresponding data-payload attributes, are not recoverable from the text.]
Generally, the packets transmitted between packages through ZPI and those transmitted between dies through ZDI differ in two ways. First, the packets transmitted between packages through ZPI encode and transmit the data payload and the ZPI information separately, as in the first packet 1200A of fig. 12A and the second packet 1200B of fig. 12B, while the packets transmitted between dies through ZDI encode and transmit the data payload together with the ZDI information, as in the third packet 1300 of fig. 13. Second, the number of bits occupied by the data payload in the packets transmitted between dies through ZDI may be a fixed value, while the data payload in the packets transmitted between packages through ZPI may vary with the congestion level of ZPI. Therefore, when ZPI is congested, the bandwidth utilization can be improved; when ZPI is relatively uncongested, the transmission latency is reduced.
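That structural contrast can be sketched as C structs. The patent does not disclose bit-level layouts, so every field width below is an invented placeholder; only the split-versus-combined arrangement and the fixed-versus-variable payload sizes come from the text above.

```c
#include <stdint.h>

struct zpi_first_packet {      /* fig. 12A: interconnect info only      */
    uint8_t  header;           /* attributes of the ZPI information     */
    uint8_t  zpi_info[8];      /* fixed-size interconnect information   */
    uint32_t crc;              /* covers zpi_info                       */
};

struct zpi_second_packet {     /* fig. 12B: payload only                */
    uint8_t  header;           /* 5-bit payload attribute (for example) */
    uint16_t payload_len;      /* variable: grows as ZPI congests       */
    uint8_t  payload[];        /* flexible array member                 */
    /* a CRC covering the payload follows it on the wire */
};

struct zdi_third_packet {      /* fig. 13: info and payload combined    */
    uint8_t  header;           /* attributes of payload + ZDI info      */
    uint8_t  zdi_info[8];      /* fixed size                            */
    uint8_t  payload[64];      /* fixed size                            */
    uint32_t crc;              /* covers payload and zdi_info           */
};
```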
Ordinal numbers such as "first" and "second" in this description and in the claims are used for convenience only and do not imply any sequential relationship.
The above paragraphs describe the invention in a number of aspects. It should be apparent that the teachings herein may be implemented in a wide variety of ways, and that any specific architecture or functionality disclosed in the examples is merely representative. It will be appreciated by those of ordinary skill in the art that each of the aspects disclosed herein can be implemented independently or in combination in a variety of ways, all in accordance with the teachings herein.
Although the present disclosure has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the disclosure; the scope of the invention is therefore defined by the appended claims.

Claims (23)

1. An interconnect system, including a plurality of packages and a first interconnect interface, any two of the packages being configured to access each other's hardware resources by transmitting a first packet and a second packet through the first interconnect interface, wherein:
the first packet includes first interconnection information for establishing communication between said any two packages; and
the second packet includes a first data payload loaded from one of the two packages;
wherein the packages include a first package and a second package, connected to each other through the first interconnect interface.
2. The interconnect system of claim 1, wherein the first packet further comprises:
a first header for indicating an attribute of the first interconnection information; and
a first check code for checking the correctness of the first interconnection information.
3. The interconnect system of claim 1, wherein the number of bits occupied by the first interconnect information is fixed.
4. The interconnect system of claim 1, wherein the second packet further comprises:
a second header for indicating the attribute of the first data payload;
a second check code for checking the correctness of the first data payload.
5. The interconnect system of claim 1, wherein the first data payload has a greater number of bits when the first interconnect interface is congested than when the first interconnect interface is not congested.
6. The interconnect system of claim 1, wherein each of the packages includes a plurality of dies and a second interconnect interface, any two of the dies having access to each other's hardware resources by transmitting a third packet through the second interconnect interface; and
wherein the dies include a first die and a second die, the first die and the second die being configured to be connected to each other through the second interconnect interface; and
wherein the third packet comprises:
second interconnection information for establishing communication between the two dies; and
a second data payload loaded from one of the two dies.
7. The interconnect system of claim 6, wherein the third packet comprises:
a third header for indicating the second data payload and the attributes of the second interconnect information;
a third check code for checking the correctness of the second data payload and the second interconnection information.
8. The interconnect system of claim 6, wherein the number of bits occupied by the second data payload and the number of bits occupied by the second interconnect information are fixed.
9. The interconnect system of claim 1, wherein the hardware resource comprises a last level cache; and
wherein the first and second packets are configured to maintain cache coherency between the last level caches of the any two packages.
10. The interconnect system of claim 1, wherein the packages further include a third package, wherein the first package and the third package are configured to be interconnected by the first interconnect interface, and the second package and the third package are configured to be interconnected by the first interconnect interface.
11. The interconnect system of claim 1, the packages further comprising a third package and a fourth package, wherein the first package and the third package are configured to be interconnected by the first interconnect interface, the second package and the fourth package are configured to be interconnected by the first interconnect interface;
the first package, the second package, the third package and the fourth package are located on a first plane.
12. The interconnect system of claim 11, wherein the first and fourth packages are further configured to be interconnected by the first interconnect interface, the second and third packages being further configured to be interconnected by the first interconnect interface.
13. The interconnect system of claim 11, the packages further comprising a fifth package and a sixth package, wherein the first package and the fifth package are configured to be interconnected by the first interconnect interface, the fourth package and the fifth package are configured to be interconnected by the first interconnect interface, the fifth package and the sixth package are configured to be interconnected by the first interconnect interface, the second package and the sixth package are configured to be interconnected by the first interconnect interface, the third package and the sixth package are configured to be interconnected by the first interconnect interface;
wherein the fifth package is located on a second plane and the sixth package is located on a third plane; and
the first plane, the second plane and the third plane are parallel to each other, and the first plane is between the second plane and the third plane.
14. The interconnect system of claim 11, the packages further comprising a fifth package, a sixth package, a seventh package, and an eighth package, wherein the fifth package and the sixth package are configured to be interconnected through the first interconnect interface, the fifth package and the seventh package are configured to be interconnected through the first interconnect interface, the sixth package and the eighth package are configured to be interconnected through the first interconnect interface, the seventh package and the eighth package are configured to be interconnected through the first interconnect interface, the first package and the fifth package are configured to be interconnected through the first interconnect interface, the second package and the sixth package are configured to be interconnected through the first interconnect interface, the third package and the seventh package are configured to be interconnected through the first interconnect interface, and the fourth package and the eighth package are configured to be interconnected through the first interconnect interface; and
the fifth package, the sixth package, the seventh package and the eighth package are all located on a second plane, and the second plane is parallel to the first plane.
15. An interconnect system, including a plurality of modules and a first interconnect interface, any two of the modules being configured to access each other's hardware resources by transmitting a first packet and a second packet through the first interconnect interface, wherein:
the first packet includes first interconnection information for establishing communication between said any two modules; and
the second packet includes a first data payload loaded from one of the two modules;
wherein the modules include a first module and a second module, connected to each other through the first interconnect interface, and the modules are dies or chiplets.
16. The interconnect system of claim 15, wherein the first packet further comprises:
a first header for indicating an attribute of the first interconnection information; and
a first check code for checking the correctness of the first interconnection information,
wherein the second packet further comprises:
a second header for indicating the attribute of the first data payload; and
a second check code for checking the correctness of the first data payload.
17. The interconnect system of claim 15, wherein a number of bits occupied by the first interconnection information is fixed, and wherein a number of bits occupied by the first data payload when the first interconnect interface is congested is greater than a number of bits occupied by the first data payload when the first interconnect interface is not congested.
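Claim 17's sizing rule, a fixed-width interconnection-information field but a data payload that grows under congestion, suggests a policy like the sketch below; the byte counts and the single congestion flag are assumptions for illustration. A plausible rationale is that larger payloads under congestion amortize the fixed header and check-code overhead over more transferred bytes.

```c
#include <stddef.h>

/* Hypothetical sizing policy for claim 17: interconnection information
 * always occupies a fixed number of bits, while the data payload is
 * larger when the interface is congested. All sizes are assumptions. */
enum { PAYLOAD_BYTES_IDLE = 32, PAYLOAD_BYTES_CONGESTED = 62 };

static size_t payload_bytes(int interface_congested)
{
    return interface_congested ? PAYLOAD_BYTES_CONGESTED
                               : PAYLOAD_BYTES_IDLE;
}
```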
18. The interconnect system of claim 15, wherein the hardware resources comprise a last level cache; and
the first and second packets are used to maintain cache coherency between the last level caches of the any two modules.
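Claim 18 does not enumerate a coherence protocol; it only requires that the two packet types keep the last level caches of any two modules coherent. As a purely hypothetical reading, the first packet's interconnection information could carry MESI-style snoop commands such as these, with dirty data returned in the second packet:

```c
/* Hypothetical MESI-style snoop commands the first packet could carry
 * to keep last level caches coherent; the patent names no such set. */
typedef enum {
    SNOOP_READ_SHARED,     /* peer LLC requests a shared copy          */
    SNOOP_READ_EXCLUSIVE,  /* peer LLC requests ownership of the line  */
    SNOOP_INVALIDATE,      /* peer LLC must drop its copy of the line  */
    SNOOP_WRITEBACK        /* dirty line travels back in second packet */
} snoop_cmd_t;
```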
19. The interconnect system of claim 15, wherein the modules further comprise a third module, the first module and the third module are configured to be interconnected by the first interconnect interface, and the second module and the third module are configured to be interconnected by the first interconnect interface.
20. The interconnect system of claim 15, wherein the modules further comprise a third module and a fourth module, the first module and the third module are configured to be interconnected by the first interconnect interface, and the second module and the fourth module are configured to be interconnected by the first interconnect interface;
wherein the first module, the second module, the third module and the fourth module are all located on a first plane.
21. The interconnect system of claim 20, wherein the first module and the fourth module are further configured to be interconnected by the first interconnect interface, and the second module and the third module are further configured to be interconnected by the first interconnect interface.
22. The interconnect system of claim 20, wherein the modules further comprise a fifth module and a sixth module, the first module and the fifth module are configured to be interconnected by the first interconnect interface, the fourth module and the fifth module are configured to be interconnected by the first interconnect interface, the fifth module and the sixth module are configured to be interconnected by the first interconnect interface, the second module and the sixth module are configured to be interconnected by the first interconnect interface, and the third module and the sixth module are configured to be interconnected by the first interconnect interface;
wherein the fifth module is located on a second plane and the sixth module is located on a third plane; and
the first plane, the second plane and the third plane are parallel to each other, and the first plane is between the second plane and the third plane.
23. The interconnect system of claim 20, wherein the modules further comprise a fifth module, a sixth module, a seventh module, and an eighth module, the fifth module and the sixth module are configured to be interconnected by the first interconnect interface, the fifth module and the seventh module are configured to be interconnected by the first interconnect interface, the sixth module and the eighth module are configured to be interconnected by the first interconnect interface, the seventh module and the eighth module are configured to be interconnected by the first interconnect interface, the first module and the fifth module are configured to be interconnected by the first interconnect interface, the second module and the sixth module are configured to be interconnected by the first interconnect interface, the third module and the seventh module are configured to be interconnected by the first interconnect interface, and the fourth module and the eighth module are configured to be interconnected by the first interconnect interface; and
the fifth module, the sixth module, the seventh module and the eighth module are all located on a second plane, and the second plane is parallel to the first plane.
CN202111142579.1A 2021-09-28 2021-09-28 Interconnection system Pending CN113868171A (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN202111142579.1A CN113868171A (en) 2021-09-28 2021-09-28 Interconnection system
US17/506,124 US11853250B2 (en) 2021-09-28 2021-10-20 Interconnect interface
US17/506,144 US11675729B2 (en) 2021-09-28 2021-10-20 Electronic device and operation method of sleep mode thereof
US17/511,800 US11526460B1 (en) 2021-09-28 2021-10-27 Multi-chip processing system and method for adding routing path information into headers of packets
US17/523,049 US12001375B2 (en) 2021-09-28 2021-11-10 Interconnect system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111142579.1A CN113868171A (en) 2021-09-28 2021-09-28 Interconnection system

Publications (1)

Publication Number Publication Date
CN113868171A (en) 2021-12-31

Family

ID=78991890

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111142579.1A Pending CN113868171A (en) 2021-09-28 2021-09-28 Interconnection system

Country Status (1)

Country Link
CN (1) CN113868171A (en)

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050034049A1 (en) * 2003-08-05 2005-02-10 Newisys, Inc. Communication between multi-processor clusters of multi-cluster computer systems
US20050034048A1 (en) * 2003-08-05 2005-02-10 Newisys, Inc. Reliable communication between multi-processor clusters of multi-cluster computer systems
CN1702858A (en) * 2004-05-28 2005-11-30 英特尔公司 Multiprocessor chip with bidirectional ring interconnection
EP1783604A2 (en) * 2005-11-07 2007-05-09 Slawomir Adam Janczewski Object-oriented, parallel language, method of programming and multi-processor computer
US20090019206A1 (en) * 2007-07-11 2009-01-15 Yehiel Engel Systems and Methods for Efficient Handling of Data Traffic and Processing Within a Processing Device
US20120106228A1 (en) * 2010-11-03 2012-05-03 Netlist, Inc. Method and apparatus for optimizing driver load in a memory package
CN103176925A (en) * 2011-12-20 2013-06-26 宏碁股份有限公司 Apparatus, system, and method for analyzing and managing data flow of interface apparatuses
TW201337372A (en) * 2012-01-31 2013-09-16 Hewlett Packard Development Co Hybrid electro-optical package for an opto-electronic engine
CN104683249A (en) * 2015-02-26 2015-06-03 浪潮电子信息产业股份有限公司 Independent configurable interconnection module implementing method for multi-chip interconnection system
CN105740178A (en) * 2014-12-09 2016-07-06 扬智科技股份有限公司 Chip network system and formation method therefor
GB201816930D0 (en) * 2017-10-20 2018-11-28 Graphcore Ltd Sending data off-chip
US10339059B1 * 2013-04-08 2019-07-02 Mellanox Technologies, Ltd. Global socket to socket cache coherence architecture
CN111343519A (en) * 2020-02-24 2020-06-26 桂林电子科技大学 Photoelectric interconnection network architecture and data transmission method
CN112583540A (en) * 2018-01-08 2021-03-30 英特尔公司 Crosstalk generation in multi-channel links during channel testing

Similar Documents

Publication Publication Date Title
US10084692B2 (en) Streaming bridge design with host interfaces and network on chip (NoC) layers
US11580054B2 (en) Scalable network-on-chip for high-bandwidth memory
KR101727874B1 (en) Method, apparatus and system for qos within high performance fabrics
KR100687659B1 (en) Network interface of controlling lock operation in accordance with axi protocol, packet data communication on-chip interconnect system of including the network interface, and method of operating the network interface
US20150103822A1 (en) Noc interface protocol adaptive to varied host interface protocols
US8014401B2 (en) Electronic device and method of communication resource allocation
JP2006502487A (en) Integrated circuit and method for exchanging data
JP2017506378A (en) Method and system for flexible credit exchange in high performance fabric
WO2022166427A1 (en) Data transmission events for use in interconnection die
WO2014037916A2 (en) Method and apparatus for transferring packets between interface control modules of line cards
US11954059B2 (en) Signal processing chip and signal processing system
CN114185840A (en) Three-dimensional multi-bare-chip interconnection network structure
CN117331881A (en) Data transmission system suitable for aerospace chip interconnection protocol
US20210297361A1 (en) Method and system for robust streaming of data
US11853250B2 (en) Interconnect interface
CN113868171A (en) Interconnection system
US12001375B2 (en) Interconnect system
Matveeva et al. QoS support in embedded networks and NoC
Salazar-García et al. PlasticNet: A low latency flexible network architecture for interconnected multi-FPGA systems
CN113868172A (en) Interconnection interface
US20220405223A1 (en) Method and system for data transactions on a communications interface
CN117795913A (en) Data bus inversion using multiple transforms
CN117453609B (en) Multi-kernel software program configuration method and device, electronic equipment and storage medium
CN112835847B (en) Distributed interrupt transmission method and system for interconnected bare core
US20230080284A1 (en) Devices using chiplet based storage architectures

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 301, 2537 Jinke Road, Zhangjiang High Tech Park, Pudong New Area, Shanghai 201203

Applicant after: Shanghai Zhaoxin Semiconductor Co.,Ltd.

Address before: Room 301, 2537 Jinke Road, Zhangjiang hi tech park, Shanghai 201203

Applicant before: VIA ALLIANCE SEMICONDUCTOR Co.,Ltd.