CN215601334U - 3D-IC baseband chip and stacked chip - Google Patents

3D-IC baseband chip and stacked chip

Info

Publication number
CN215601334U
Authority
CN
China
Prior art keywords
storage
network node
network
array
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202122497900.XU
Other languages
Chinese (zh)
Inventor
周小锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Ziguang Guoxin Semiconductor Co., Ltd.
Original Assignee
Xi'an UniIC Semiconductors Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an UniIC Semiconductors Co., Ltd.
Priority to CN202122497900.XU
Application granted
Publication of CN215601334U
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Semiconductor Memories (AREA)

Abstract

The utility model discloses a 3D-IC baseband chip and a stacked chip. A network topology is constructed in the logic unit to connect the network nodes, so that communication between network nodes no longer depends on bus arbitration but on node-to-node communication: each network node can establish its own communication path in parallel within the network topology to access the storage arrays corresponding to other network nodes, without mutual interference or queuing, which improves data processing efficiency. In addition, the storage unit is integrated in the chip and divided into a plurality of storage arrays, and each network node can directly access its corresponding storage array through the corresponding bump array, which improves the efficiency of accessing data from the storage unit and thus the computing performance of the whole 3D-IC baseband chip.

Description

3D-IC baseband chip and stacked chip
Technical Field
The application relates to the technical field of chips, in particular to a 3D-IC baseband chip and a stacked chip.
Background
In the prior art, the baseband chip is generally a lumped baseband chip. For example, in Fig. 1, computing units such as the CPU 110, the soft core array 120 and the accelerator 130 are interconnected inside the chip by buses in a lumped manner, and a discrete memory outside the chip (not shown in the figure) is used to store data.
With this structural design, if several computing units in the chip access the off-chip memory at the same time, their requests inevitably queue at the bus and are served in sequence after bus arbitration, which reduces the data processing efficiency of each computing unit and results in poor baseband chip performance. In addition, accessing data in off-chip discrete memory is inefficient and incurs high latency, further limiting the performance of the baseband chip.
SUMMARY OF THE UTILITY MODEL
The utility model provides a 3D-IC baseband chip and a stacked chip, so as to solve the technical problem in the prior art that low data processing efficiency leads to poor baseband chip performance.
According to a first aspect of the present invention, there is provided a 3D-IC baseband chip comprising: a storage unit comprising a plurality of storage arrays, each storage array being provided with a bump array; and
a logic unit comprising a plurality of routing nodes and a plurality of network nodes, wherein the routing nodes are interconnected to form a network topology, each routing node is connected to a corresponding network node, and each network node is connected to its corresponding storage array through the corresponding bump array.
According to a preferred embodiment of the 3D-IC baseband chip of the present invention, the logic unit further comprises a storage controller configured to control at least part of the storage arrays of the storage unit; the storage controller is connected to at least some of the routing nodes and/or at least some of the network nodes, and those network nodes share the same storage controller for storage access to at least part of the storage arrays.
According to a preferred embodiment of the 3D-IC baseband chip of the present invention, the logic unit further comprises a plurality of storage controllers, each routing node and/or each network node being connected to one storage controller, and each network node uses its corresponding storage controller for storage access to the storage arrays controlled by that storage controller.
According to a preferred embodiment of the 3D-IC baseband chip of the present invention, the logic unit further comprises a buffer connected to the storage unit through a corresponding bump array, the buffer being configured to convert the voltage of the storage unit into the voltage of the logic unit, or to convert the voltage of the logic unit into the voltage of the storage unit.
According to a preferred embodiment of the 3D-IC baseband chip of the present invention, the network node is one of: soft core, accelerator, soft core cluster, accelerator cluster.
According to a preferred embodiment of the 3D-IC baseband chip of the present invention, any network node performs storage access to the storage arrays corresponding to the other network nodes through the routing node connected to itself and the routing nodes connected to those other network nodes; or
any network node performs storage access to at least one of a soft core, an accelerator, a soft core cluster and an accelerator cluster corresponding to the other network nodes through the routing node connected to itself and the routing nodes connected to those other network nodes.
According to a preferred embodiment of the 3D-IC baseband chip of the present invention, the storage unit comprises a DRAM unit and an NVM unit;
the network node is connected to the storage array corresponding to the DRAM unit through a first bump array, and the network node is connected to the storage array corresponding to the NVM unit through a second bump array; and
the storage controller comprises a DRAM controller and an NVM controller, the DRAM controller controlling the storage array corresponding to the DRAM unit, and the NVM controller controlling the storage array corresponding to the NVM unit.
According to a preferred embodiment of the 3D-IC baseband chip of the present invention,
the network node uses its corresponding DRAM controller for storage access to the storage array in the DRAM unit controlled by that DRAM controller; and
the network node uses its corresponding NVM controller for storage access to the storage array in the NVM unit controlled by that NVM controller.
In a second aspect of the utility model, there is provided a stacked chip comprising the 3D-IC baseband chip of any of the above aspects;
and a processor connected to the 3D-IC baseband chip in a three-dimensional stack.
Through one or more of the above technical solutions, the utility model has the following beneficial effects or advantages:
the utility model provides a 3D-IC baseband chip and a stacked chip, wherein a network topological structure is constructed in a logic unit to connect each network node, so that the communication among the network nodes does not depend on bus arbitration any more, but utilizes the communication among the nodes, and each network node can establish respective communication path in parallel in the network topological structure to access a storage array corresponding to other network nodes without mutual interference or queue waiting, thereby improving the data processing efficiency. In addition, the storage unit is integrated in the chip and is divided into a plurality of storage arrays, and the network node can directly access the storage array corresponding to the network node through the corresponding salient point array to perform access operation, so that the efficiency of accessing data from the storage unit is improved, and the computing performance of the whole 3D-IC baseband chip is improved.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the utility model. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
Fig. 1 shows a schematic structural diagram of a lumped baseband chip in the prior art;
Fig. 2A shows a structure of a 3D-IC baseband chip according to an embodiment of the utility model;
Fig. 2B shows another structure of a 3D-IC baseband chip according to an embodiment of the utility model;
Fig. 3A illustrates a network topology in a logic unit according to an embodiment of the utility model;
Fig. 3B illustrates a combination of network nodes in a network topology according to an embodiment of the utility model;
Fig. 3C illustrates another combination of network nodes in a network topology according to an embodiment of the utility model;
Fig. 4A illustrates a structure of a logic unit based on the network topology of Fig. 3A according to an embodiment of the utility model;
Fig. 4B illustrates another structure of a logic unit based on the network topology of Fig. 3A according to an embodiment of the utility model;
Fig. 4C shows a connection structure of different kinds of storage controllers in a logic unit according to an embodiment of the utility model; and
Fig. 4D shows a structure of a logic unit based on the network topology of Fig. 3C according to an embodiment of the utility model.
Description of reference numerals: CPU 110, soft core array 120, accelerator 130, substrate 200, logic unit 210, routing node 2101, network node 2102, soft core 21021, soft core cluster 21022, accelerator cluster 21023, storage controller 2103, DRAM controller 21031, NVM controller 20132, buffer 2104, storage unit 220, DRAM unit 2201, NVM unit 2202.
Detailed Description
In order to solve the technical problem of low data processing efficiency in the prior art, the utility model provides a 3D-IC baseband chip and a stacked chip. Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. It is to be understood that such description is merely illustrative and not intended to limit the scope of the present invention. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present invention.
Various structural schematics according to embodiments of the present invention are shown in the figures. The figures are not drawn to scale, wherein certain details are exaggerated and possibly omitted for clarity of presentation. The shapes of various regions, layers, and relative sizes and positional relationships therebetween shown in the drawings are merely exemplary, and deviations may occur in practice due to manufacturing tolerances or technical limitations, and a person skilled in the art may additionally design regions/layers having different shapes, sizes, relative positions, as actually required.
In the context of the present invention, when a layer/element is referred to as being "on" another layer/element, it can be directly on the other layer/element or intervening layers/elements may be present. In addition, if a layer/element is "on" another layer/element in one orientation, then that layer/element may be "under" the other layer/element when the orientation is reversed.
In the above description, the technical details of patterning, etching, and the like of each layer are not described in detail. It will be appreciated by those skilled in the art that layers, regions, etc. of the desired shape may be formed by various technical means. In addition, in order to form the same structure, those skilled in the art can also design a method which is not exactly the same as the method described above. In addition, although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used in advantageous combination.
The technical solution of the present invention is further described in detail by the accompanying drawings and the specific embodiments.
This embodiment discloses a 3D-IC (three-dimensional integrated circuit) baseband chip whose processing bandwidth exceeds 1 TB/s. The 3D-IC baseband chip comprises a logic unit 210 and a storage unit 220; the logic unit 210 is used to access data from the storage unit 220, and the storage unit 220 is used to store data. Fig. 2A shows a structure of the 3D-IC baseband chip of this embodiment. In Fig. 2A, the logic unit 210 is packaged on the substrate 200, and the storage unit 220 is stacked vertically above the logic unit 210. In the chip of this embodiment, the logic unit 210 and the storage unit 220 each exist in the form of a wafer; further, the logic unit 210 and the storage unit 220 are stacked and integrated by bonding.
In the present embodiment, the storage unit 220 comprises a plurality of storage arrays, for example two or more storage arrays, and each storage array is provided with a bump array. It should be noted that the quantities mentioned in the following embodiments of the present application are given only by way of example and do not constitute a limitation. The logic unit 210 comprises a plurality of routing nodes 2101 and a plurality of network nodes 2102; the routing nodes 2101 are interconnected to form a network topology, each routing node 2101 is connected to a corresponding network node 2102, and the network nodes 2102 are connected to their corresponding storage arrays through corresponding bump arrays. The network topology in the logic unit 210 is shown in Figs. 3A-3C and is described later. Since the storage unit 220 allocates a corresponding storage array to each network node 2102 within the chip, the network nodes 2102 do not need to fetch data from an off-chip memory: any network node 2102 can directly access its corresponding storage array through the corresponding bump array, which improves the efficiency of data access. In addition, on the basis of the network topology in the logic unit 210 of this embodiment, when any network node 2102 accesses a storage array other than its own, it can reach the storage array corresponding to another network node 2102 through the routing node 2101 connected to itself and the routing node 2101 connected to that other network node 2102. It can be seen that, in the 3D-IC baseband chip of this embodiment, each network node 2102 can directly access its corresponding storage array through its corresponding bump array, and can also establish its own communication path in parallel within the network topology to access the storage arrays corresponding to other nodes; the accesses of the network nodes 2102 do not interfere with each other and need not wait in a queue.
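The following is a minimal Python model of the structure described above, given for illustration only; the class names (MemoryArray, RoutingNode, NetworkNode) and the XY-routing helper are assumptions, not taken from the patent. Each routing node carries one network node, each network node reaches its own storage array directly through its bump array, and remote arrays are reached by hopping through the mesh of routing nodes rather than by arbitrating on a shared bus.

class MemoryArray:
    """One storage array of the storage unit, reached through a bump array."""
    def __init__(self, name):
        self.name = name
        self.cells = {}

    def access(self, addr, value=None):
        # Write when a value is supplied, otherwise read.
        if value is not None:
            self.cells[addr] = value
        return self.cells.get(addr)


class RoutingNode:
    """One router of the network topology, placed at a mesh coordinate."""
    def __init__(self, coord):
        self.coord = coord           # (row, column) in the mesh
        self.network_node = None     # the single network node attached here


class NetworkNode:
    """A soft core, accelerator, or cluster hanging off one routing node."""
    def __init__(self, name, router, local_array):
        self.name = name
        self.router = router
        self.local_array = local_array
        router.network_node = self

    def access_local(self, addr, value=None):
        # Direct access through this node's own bump array: no arbitration, no routing.
        return self.local_array.access(addr, value)

    def access_remote(self, other, addr, value=None):
        # Establish a path through the topology to the other node's router, then
        # access that node's storage array; paths of different nodes can exist in
        # parallel without queuing on a shared bus.
        path = xy_route(self.router, other.router)
        assert path[-1] == other.router.coord
        return other.local_array.access(addr, value)


def xy_route(src, dst):
    """Coordinates visited going along the row first, then down the column."""
    (r0, c0), (r1, c1) = src.coord, dst.coord
    step_c = 1 if c1 >= c0 else -1
    step_r = 1 if r1 >= r0 else -1
    hops = [(r0, c) for c in range(c0, c1 + step_c, step_c)]
    hops += [(r, c1) for r in range(r0 + step_r, r1 + step_r, step_r)]
    return hops


# Example: two nodes on a 1x2 mesh accessing their own and each other's arrays.
r0, r1 = RoutingNode((0, 0)), RoutingNode((0, 1))
a = NetworkNode("soft_core_0", r0, MemoryArray("array_0"))
b = NetworkNode("accelerator_0", r1, MemoryArray("array_1"))
a.access_local("x", 1)              # direct, via a's bump array
print(b.access_remote(a, "x"))      # routed through the mesh -> 1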
As an alternative embodiment, the storage unit 220 in the 3D-IC baseband chip contains a plurality of memory types. Fig. 2B shows another structure of the 3D-IC baseband chip. In Fig. 2B, the storage unit 220 comprises a DRAM (Dynamic Random Access Memory) unit 2201 and an NVM (Non-Volatile Memory) unit 2202, which do not constitute a limitation on the kind and number of memories. The DRAM unit 2201 and the NVM unit 2202 are stacked vertically above the logic unit 210, and the NVM unit 2202 is connected through the DRAM unit 2201 to the logic unit 210 by through-silicon-via (TSV) technology. Specifically, the DRAM unit 2201 comprises a plurality of corresponding storage arrays, each of which is provided with a first bump array; correspondingly, the NVM unit 2202 comprises a plurality of corresponding storage arrays, each of which is provided with a second bump array. On this basis, the DRAM unit 2201 and the NVM unit 2202 each allocate a corresponding storage array to each network node 2102. A network node 2102 is therefore connected to its storage array in the DRAM unit 2201 through the first bump array and can directly access it for data access operations, and is connected to its storage array in the NVM unit 2202 through the second bump array and can directly access it for data access operations. When a network node 2102 accesses the storage arrays corresponding to the other network nodes 2102, reference may be made to the related description of the foregoing embodiment, which is not repeated here.
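As a small illustrative extension of the sketch above (the class and attribute names such as HybridNetworkNode, dram_array and nvm_array are assumptions, not the patent's terminology): once the storage unit is split into a DRAM unit and an NVM unit, each network node has two local attachments, one per memory kind, reached through the first and second bump arrays respectively.

class HybridNetworkNode:
    """A network node with one DRAM-side and one NVM-side local storage array."""
    def __init__(self, name, router, dram_array, nvm_array):
        self.name = name
        self.router = router
        self.dram_array = dram_array   # reached through the first bump array
        self.nvm_array = nvm_array     # reached through the second bump array
        router.network_node = self

    def access_local(self, kind, addr, value=None):
        # 'kind' selects which of the node's two local arrays is addressed.
        array = self.dram_array if kind == "dram" else self.nvm_array
        return array.access(addr, value)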
The above embodiments describe the specific structure of the storage unit 220; the following embodiments describe the logic unit 210.
Fig. 3A shows the network topology in the logic unit 210, which is formed by a plurality of interconnected routing nodes 2101, each routing node 2101 being connected to a corresponding network node 2102. Since a network node 2102 is one of a soft core 21021, an accelerator 130, a soft core cluster 21022 and an accelerator cluster 21023, the network nodes 2102 in the network topology can be combined in many ways, namely any combination of the soft core 21021, the accelerator 130, the soft core cluster 21022 and the accelerator cluster 21023. When communicating, any network node 2102 performs storage access to at least one of the soft core 21021, the accelerator 130, the soft core cluster 21022 and the accelerator cluster 21023 corresponding to another network node 2102 through the routing node 2101 connected to itself and the routing node 2101 connected to that other network node 2102.

Fig. 3B illustrates, without limitation, one combination of network nodes 2102 in the network topology. In a first direction of the network topology, of two adjacent routing nodes 2101, one is connected to a soft core 21021 and the other is connected to an accelerator 130, so that soft cores 21021 and accelerators 130 alternate in the first direction. With this structure, any soft core 21021 (or any accelerator 130) can directly access its own corresponding storage array, and can also access the storage arrays corresponding to other soft cores 21021 (or other accelerators 130) through its own routing node 2101 and the routing nodes 2101 of those other soft cores 21021 (or other accelerators 130). Of course, other combinations of network nodes 2102 are possible: for example, all routing nodes 2101 may be connected to respective soft cores 21021, or each routing node 2101 may be connected to either a soft core 21021 or an accelerator 130. The combination of network nodes 2102 in the network topology of this embodiment is therefore flexible and can be chosen according to the actual situation.

Fig. 3C illustrates, without limitation, another combination of network nodes 2102 in the network topology. In the first direction of the network topology, of two adjacent routing nodes 2101, one is connected to a soft core cluster 21022 and the other is connected to an accelerator cluster 21023, so that soft core clusters 21022 and accelerator clusters 21023 alternate in the first direction. Preferably, the soft cores 21021 in each soft core cluster 21022 are interconnected by a first bus, over which they can access the storage arrays. Similarly, the accelerators 130 in each accelerator cluster 21023 are interconnected by a second bus, over which they can access the storage arrays. Since the soft cores 21021 (or accelerators 130) within a soft core cluster 21022 (or accelerator cluster 21023) are small-scale, short-distance communication nodes, using a bus within the cluster preserves communication efficiency. The communication scheme combining the network topology with buses therefore offers flexible and varied communication modes while maintaining communication efficiency. Of course, there are still other combinations of network nodes 2102 in the network topology.
For example, all routing nodes 2101 may be connected to respective soft core clusters 21022; or, of two adjacent routing nodes 2101, one may be connected to a soft core cluster 21022 and the other to a soft core 21021 or an accelerator 130. It should be noted that any combination of network nodes 2102 in the network topology is intended to fall within the scope of the utility model.
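The following sketch illustrates two of the arrangements described above; the helper names build_alternating_row and SoftCoreCluster are assumptions, and the array objects are assumed to follow the access interface of the earlier sketch. It shows soft cores and accelerators alternating along one direction of the mesh, and a cluster whose members share a first bus behind a single routing node.

def build_alternating_row(routers, make_soft_core, make_accelerator):
    """Attach a soft core to every even-indexed router and an accelerator to every odd one."""
    nodes = []
    for i, router in enumerate(routers):
        factory = make_soft_core if i % 2 == 0 else make_accelerator
        nodes.append(factory(router))
    return nodes


class SoftCoreCluster:
    """Several soft cores behind one routing node, interconnected by a first bus."""
    def __init__(self, router, soft_cores):
        self.router = router
        self.soft_cores = soft_cores   # small-scale, short-distance members

    def bus_access(self, core_index, array, addr, value=None):
        # Inside the cluster the shared bus is enough; the network topology is
        # only involved once traffic leaves the cluster through the router.
        assert 0 <= core_index < len(self.soft_cores)
        return array.access(addr, value)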
The logic unit 210 further comprises a storage controller 2103 and a buffer 2104.
The storage controller 2103 is the "middleware" through which a network node 2102 accesses the storage arrays. A single storage controller 2103 may be configured to control at least part of the storage arrays in the storage unit 220; alternatively, when the logic unit 210 comprises a plurality of storage controllers 2103, each routing node 2101 and/or each network node 2102 has its own storage controller 2103, and the network node 2102 uses that storage controller 2103 to access the corresponding storage arrays. In this embodiment, the storage controller 2103 and the network nodes 2102 may therefore be in a "one-to-many" or a "one-to-one" correspondence; the correspondence between the storage controller 2103 and the routing nodes 2101 is similar.
When the storage controller 2103 corresponds to at least some of the routing nodes 2101 and/or at least some of the network nodes 2102, the storage controller 2103 is configured to control at least part of the storage arrays of the storage unit 220. Specifically, the storage controller 2103 is connected to at least some of the routing nodes 2101 and/or at least some of the network nodes 2102; this structural design eases place-and-route pressure while maintaining the computing performance of the chip. When accessing the storage arrays, at least some of the network nodes 2102 share the same storage controller 2103 for storage access to at least part of the storage arrays. Fig. 4A shows, without limitation, a structure of the logic unit 210 based on the network topology of Fig. 3A, taking as an example the network nodes 2102 connected to the routing nodes 2101 in the last row of Fig. 3A. In this configuration, the network nodes 2102 connected to the routing nodes 2101 in the last row are jointly connected to one storage controller 2103. The storage controller 2103 is connected to the buffer 2104 corresponding to each network node 2102, and the buffer 2104 corresponding to each network node 2102 is connected to the corresponding storage array of the storage unit 220 through a bump array.
When the storage controllers 2103 correspond one-to-one to the routing nodes 2101 and/or the network nodes 2102, each routing node 2101 and/or each network node 2102 is connected to its own storage controller 2103, so that when accessing the storage arrays each network node 2102 uses its corresponding storage controller 2103 for storage access to the storage arrays controlled by that controller. Fig. 4B shows, without limitation, another structure of the logic unit 210 based on the network topology of Fig. 3A, again taking as an example the network nodes 2102 connected to the routing nodes 2101 in the last row of Fig. 3A. In this configuration, each network node 2102 connected to a routing node 2101 in the last row is connected to its own storage controller 2103. Each storage controller 2103 is connected to the buffer 2104 corresponding to its network node 2102, and the buffer 2104 corresponding to each network node 2102 is connected to the corresponding storage array of the storage unit 220 through a bump array.
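The rough queueing sketch below contrasts the two arrangements just described; the class names are assumptions, and the array objects are assumed to follow the access interface of the earlier sketch. Several network nodes sharing one storage controller take turns at that controller, while one storage controller per node lets every node perform its storage access independently.

from collections import deque

class SharedStorageController:
    """"One-to-many": requests from the sharing nodes are served one at a time."""
    def __init__(self):
        self.queue = deque()

    def submit(self, node_name, array, addr, value=None):
        self.queue.append((node_name, array, addr, value))

    def drain(self):
        served = []
        while self.queue:                          # serialized service
            node_name, array, addr, value = self.queue.popleft()
            served.append((node_name, array.access(addr, value)))
        return served


class PrivateStorageController:
    """"One-to-one": each network node owns its controller, so accesses never queue here."""
    def __init__(self, array):
        self.array = array

    def access(self, addr, value=None):
        return self.array.access(addr, value)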
It is noted that different types of storage units 220 correspond to different types of storage controllers 2103. Referring to Fig. 4C, which shows the connection structure of different kinds of storage controllers 2103 in the logic unit 210: on the basis of the structure in which the storage unit 220 comprises a DRAM unit 2201 and an NVM unit 2202, the storage controllers 2103 include a DRAM controller 21031 and an NVM controller 20132. The DRAM controller 21031 controls the storage arrays corresponding to the DRAM unit 2201, and the NVM controller 20132 controls the storage arrays corresponding to the NVM unit 2202.
Further, since different types of storage units 220 correspond to different types of storage controllers 2103, when accessing different types of storage units 220 a network node 2102 uses the corresponding storage controller 2103. Specifically, the network node 2102 uses the corresponding DRAM controller 21031 for storage access to the storage arrays in the DRAM unit 2201 controlled by that DRAM controller 21031, and uses the corresponding NVM controller 20132 for storage access to the storage arrays in the NVM unit 2202 controlled by that NVM controller 20132. As can be seen, since the network nodes 2102 do not interfere with each other when accessing the storage unit 220, they can access the storage unit 220 in parallel to process data, which improves data processing efficiency.
Since the logic unit 210 also supports the communication scheme combining a network topology with buses, Fig. 4D shows, without limitation, a structure of the logic unit 210 based on the network topology of Fig. 3C, taking as an example any one soft core cluster 21022 connected to a routing node 2101 in the last row of Fig. 3C; the other soft core clusters 21022 and the accelerator clusters 21023 are similar. In this configuration, the soft core cluster 21022 comprises three soft cores 21021, and each soft core 21021 corresponds to one storage controller 2103. The routing node 2101 corresponding to the soft core cluster 21022, the three soft cores 21021 in the cluster, and the storage controllers 2103 corresponding to those soft cores 21021 are interconnected through the first bus. Since the storage controller 2103 comprises a DRAM controller 21031 and an NVM controller 20132, each soft core 21021 is connected to a DRAM controller 21031 and an NVM controller 20132 respectively. The DRAM controller 21031 is connected to its corresponding buffer 2104, and that buffer 2104 is connected to the corresponding storage array of the DRAM unit 2201 through the corresponding first bump array; the NVM controller 20132 is connected to its corresponding buffer 2104, and that buffer 2104 is connected to the corresponding storage array of the NVM unit 2202 through the corresponding second bump array.
The buffer 2104 is connected between the storage controller 2103 and the storage unit 220. Referring to Figs. 4A-4D, the buffer 2104 is connected to a storage array in the storage unit 220 through the corresponding bump array; one buffer 2104 is connected to one storage array through the corresponding bump array. Specifically, on the basis of the structure in which the storage unit 220 comprises a DRAM unit 2201 and an NVM unit 2202, a buffer 2104 is connected to the storage array corresponding to the DRAM unit 2201 through the first bump array, and a buffer 2104 is connected to the storage array corresponding to the NVM unit 2202 through the second bump array. Since the voltages required by the storage unit 220 and the logic unit 210 may differ, the buffer 2104 performs voltage coordination: for example, it converts the voltage of the storage unit 220 into the voltage of the logic unit 210, or converts the voltage of the logic unit 210 into the voltage of the storage unit 220. By adapting the voltages of the storage unit 220 and the logic unit 210 to each other, the buffer 2104 reduces the risk of burning out the baseband chip.
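A very simplified sketch of the buffer's level-adaptation role follows; the class name and the two supply values are placeholders and assumptions, not figures taken from the patent, and simple rescaling stands in for the actual level-shifting circuitry.

class Buffer:
    """Sits between a storage controller and one storage array (via its bump array)."""
    def __init__(self, memory_supply_v=1.1, logic_supply_v=0.8):
        self.memory_supply_v = memory_supply_v   # assumed supply of the storage unit
        self.logic_supply_v = logic_supply_v     # assumed supply of the logic unit

    def to_logic_domain(self, level):
        # A signal coming up from the memory wafer is rescaled to the logic supply.
        return level * self.logic_supply_v / self.memory_supply_v

    def to_memory_domain(self, level):
        # A signal going down to the memory wafer is rescaled to the memory supply.
        return level * self.memory_supply_v / self.logic_supply_v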
The above is a description of the specific structure of the logic unit 210. In the present embodiment, each network node 2102 is connected by constructing a network topology in the logic unit 210, so that the communication between each network node 2102 does not depend on bus arbitration, but utilizes the communication between nodes. Therefore, after receiving the respective data processing requests, the network nodes 2102 can independently and concurrently construct respective communication paths in the network topology to access the storage arrays corresponding to the respective data processing requests. Further, since the storage controllers 2103 and the network nodes 2102 are in a one-to-one correspondence relationship, when each network node 2102 accesses the storage array corresponding to each data processing request, the corresponding storage controller 2103 can be used to independently access the corresponding storage array in parallel, so that the data processing efficiency is improved, and further, the computing performance of the 3D-IC baseband chip is improved.
Based on the same inventive concept as one or more of the above embodiments, this embodiment further provides a stacked chip comprising the 3D-IC baseband chip described in any of the above embodiments and a processor; the processor is connected to the 3D-IC baseband chip in a three-dimensional stack.
Based on the same utility model concept as one or more of the above embodiments, this embodiment introduces the implementation principle of the 3D-IC baseband chip described in any of the above embodiments, and the implementation principle is described in steps, including:
at step 501, each network node 2102 receives a respective data processing request.
Specifically, a data processing request is either a request to perform a read/write operation on the storage array corresponding to the network node 2102 itself, or a request to perform a read/write operation on the storage array corresponding to another network node 2102, and each network node 2102 has a different access pattern depending on its request. If all the data processing requests of the network nodes 2102 target their own corresponding storage arrays, step 502 is executed. If all the data processing requests target the storage arrays corresponding to other network nodes 2102, step 503 is executed. Of course, if some of the data processing requests target a node's own storage array while others target the storage arrays of other network nodes 2102, step 502 and step 503 are executed in parallel.
In step 502, each network node 2102 accesses its corresponding storage array through its corresponding bump array based on its data processing request. Since the storage unit allocates a corresponding storage array to each network node 2102 within the chip, each network node 2102 directly accesses its own storage array through its corresponding bump array, which improves the efficiency of accessing data from the storage unit and thus the computing performance of the whole 3D-IC baseband chip.
In step 503, each network node 2102 establishes, based on its data processing request, a communication path in the network topology to access the storage array corresponding to that request. Since the network topology changes the communication mode between the network nodes 2102, their communication no longer depends on bus arbitration but on node-to-node communication. Each network node 2102 can therefore independently and in parallel establish a communication path in the network topology to access the storage array corresponding to its request; the accesses of the network nodes 2102 do not interfere with each other and need not queue, which improves data processing efficiency and thus the computing performance of the 3D-IC baseband chip.
The storage unit 220 of the 3D-IC baseband chip of this embodiment comprises a DRAM unit 2201 and an NVM unit 2202, and the DRAM unit 2201 and the NVM unit 2202 each allocate a corresponding storage array to each network node 2102. Because accessing the storage arrays of different storage units 220 requires the corresponding storage controller 2103, the DRAM unit 2201 corresponds to the DRAM controller 21031 and the NVM unit 2202 corresponds to the NVM controller 20132. Therefore, if the storage array corresponding to a data processing request belongs to the DRAM unit 2201, the network node 2102 accesses that storage array in the DRAM unit 2201 through the first bump array, driven by the DRAM controller 21031; if the storage array corresponding to the request belongs to the NVM unit 2202, the network node 2102 accesses that storage array in the NVM unit 2202 through the second bump array, driven by the NVM controller 20132. As can be seen, since the network nodes 2102 do not interfere with each other when accessing the storage unit 220, they can access the storage unit 220 in parallel to process data, which improves data processing efficiency.
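The sketch below walks through the access flow of steps 501-503 as just described, building on the node classes and the xy_route helper from the earlier sketches; the request fields and the attribute names dram_ctrl and nvm_ctrl are assumptions, not the patent's own interface.

def handle_request(node, request):
    """request: {'target': a node object, 'kind': 'dram' or 'nvm',
    'addr': key, optional 'value': write data (read when absent)}."""
    target = request["target"]
    # Pick the controller for the memory kind the target array belongs to
    # (DRAM controller for the DRAM unit, NVM controller for the NVM unit).
    controller = target.dram_ctrl if request["kind"] == "dram" else target.nvm_ctrl

    if target is node:
        # Step 502: the node's own array, accessed directly through its bump array.
        return controller.access(request["addr"], request.get("value"))

    # Step 503: establish a communication path through the network topology to the
    # other node's router, then perform the access with that node's controller;
    # different nodes can do this in parallel without bus arbitration.
    path = xy_route(node.router, target.router)   # helper from the earlier sketch
    assert path, "a path always exists in the connected mesh"
    return controller.access(request["addr"], request.get("value"))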
As an alternative embodiment, in the 3D-IC baseband chip the storage controller 2103 and the network nodes 2102 may be in a "one-to-many" or a "one-to-one" correspondence; the correspondence between the storage controller 2103 and the routing nodes 2101 is similar.
When the storage controller 2103 and the network nodes 2102 are in a "one-to-many" correspondence, at least some of the network nodes 2102 are connected to the same storage controller 2103. With this structure, since the same storage controller 2103 must respond to the data processing requests of those network nodes 2102, a network node 2102 needs to queue if the storage controller 2103 is busy when it wants to use it. In a specific implementation, at least some of the network nodes 2102 reach the same storage controller 2103 directly and/or via their respective communication paths based on their respective data processing requests, and queue to use that storage controller 2103 for storage access to the storage arrays corresponding to their requests.
When the storage controllers 2103 and the network nodes 2102 are in a "one-to-one" correspondence, each network node 2102 is connected to its own storage controller 2103. With this structure, when each network node 2102 reaches the storage controller 2103 corresponding to its data processing request directly and/or via its communication path, it uses that storage controller 2103 to independently access, in parallel, the storage array corresponding to its request. In this embodiment, therefore, after establishing their respective communication paths in parallel, the network nodes 2102 can each use their corresponding storage controllers 2103 to access the associated storage arrays, so the 3D-IC baseband chip supports parallel data processing by the network nodes 2102, which improves their data processing efficiency and further improves the computing performance of the 3D-IC baseband chip.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (9)

1. A 3D-IC baseband chip, comprising: a storage unit comprising a plurality of storage arrays, each storage array being provided with a bump array; and
a logic unit comprising a plurality of routing nodes and a plurality of network nodes, wherein the routing nodes are interconnected to form a network topology, each routing node is connected to a corresponding network node, and each network node is connected to its corresponding storage array through the corresponding bump array.
2. The 3D-IC baseband chip according to claim 1,
the logic unit further comprises a storage controller configured to control at least part of the storage arrays of the storage unit, the storage controller being connected to at least some of the routing nodes and/or at least some of the network nodes, and those network nodes sharing the same storage controller for storage access to at least part of the storage arrays.
3. The 3D-IC baseband chip according to claim 2,
the logic unit further comprises a plurality of storage controllers, each routing node and/or each network node being connected to one storage controller, and each network node uses its corresponding storage controller for storage access to the storage arrays controlled by that storage controller.
4. The 3D-IC baseband chip according to claim 2 or 3, wherein the logic unit further comprises a buffer connected to the storage unit through a corresponding bump array, the buffer being configured to convert the voltage of the storage unit into the voltage of the logic unit, or to convert the voltage of the logic unit into the voltage of the storage unit.
5. The 3D-IC baseband chip according to claim 4, wherein the network node is one of: soft core, accelerator, soft core cluster, accelerator cluster.
6. The 3D-IC baseband chip according to claim 5, wherein any network node performs storage access to the storage arrays corresponding to the other network nodes through the routing node connected to itself and the routing nodes connected to those other network nodes; or
any network node performs storage access to at least one of a soft core, an accelerator, a soft core cluster and an accelerator cluster corresponding to the other network nodes through the routing node connected to itself and the routing nodes connected to those other network nodes.
7. The 3D-IC baseband chip according to claim 6, wherein the storage unit comprises a DRAM unit and an NVM unit;
the network node is connected to the storage array corresponding to the DRAM unit through a first bump array, and the network node is connected to the storage array corresponding to the NVM unit through a second bump array; and
the storage controller comprises a DRAM controller and an NVM controller, the DRAM controller controlling the storage array corresponding to the DRAM unit, and the NVM controller controlling the storage array corresponding to the NVM unit.
8. The 3D-IC baseband chip according to claim 7,
the network node uses its corresponding DRAM controller for storage access to the storage array in the DRAM unit controlled by that DRAM controller; and
the network node uses its corresponding NVM controller for storage access to the storage array in the NVM unit controlled by that NVM controller.
9. A stacked chip comprising a 3D-IC baseband chip according to any one of the preceding claims 1 to 8;
a processor connected in a three-dimensional stack to the 3D-IC baseband chip according to any one of the preceding claims 1 to 8.
CN202122497900.XU 2021-10-15 2021-10-15 3D-IC baseband chip and stacked chip Active CN215601334U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202122497900.XU CN215601334U (en) 2021-10-15 2021-10-15 3D-IC baseband chip and stacked chip

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202122497900.XU CN215601334U (en) 2021-10-15 2021-10-15 3D-IC baseband chip and stacked chip

Publications (1)

Publication Number Publication Date
CN215601334U true CN215601334U (en) 2022-01-21

Family

ID=79870991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202122497900.XU Active CN215601334U (en) 2021-10-15 2021-10-15 3D-IC baseband chip and stacked chip

Country Status (1)

Country Link
CN (1) CN215601334U (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114497033A (en) * 2022-01-27 2022-05-13 上海燧原科技有限公司 Three-dimensional chip
CN114709205A (en) * 2022-06-02 2022-07-05 西安紫光国芯半导体有限公司 Three-dimensional stacked chip and data processing method thereof
CN114709205B (en) * 2022-06-02 2022-09-09 西安紫光国芯半导体有限公司 Three-dimensional stacked chip and data processing method thereof

Similar Documents

Publication Publication Date Title
US11145384B2 (en) Memory devices and methods for managing error regions
CN215601334U (en) 3D-IC baseband chip and stacked chip
US9293170B2 (en) Configurable bandwidth memory devices and methods
JP7349812B2 (en) memory system
CN108459974A (en) The high bandwidth memory equipment of integrated flash memory
EP2887223A1 (en) Memory system, memory module, memory module access method and computer system
US20070067579A1 (en) Shared memory device
US20140310495A1 (en) Collective memory transfer devices and methods for multiple-core processors
WO2023030051A1 (en) Stacked chip
WO2018121118A1 (en) Calculating apparatus and method
WO2023030053A1 (en) Llc chip, cache system and method for reading and writing llc chip
CN102866980B (en) Network communication cell used for multi-core microprocessor on-chip interconnected network
CN108256643A (en) A kind of neural network computing device and method based on HMC
CN104360982A (en) Implementation method and system for host system directory structure based on reconfigurable chip technology
CN113688065A (en) Near memory computing module and method, near memory computing network and construction method
CN116610630B (en) Multi-core system and data transmission method based on network-on-chip
CN115996200A (en) 3D-IC baseband chip, stacked chip and data processing method
CN113722268B (en) Deposit and calculate integrative chip that piles up
CN115098431A (en) Processor, shared cache allocation method and device
CN116266463A (en) Three-dimensional storage unit, storage method, three-dimensional storage chip assembly and electronic equipment
CN118012794B (en) Computing core particle and electronic equipment
US20240078195A1 (en) Systems, methods, and devices for advanced memory technology
US20240086313A1 (en) Method of sharing memory resource for memory cloud and memory resource sharing system using the same
CN118786419A (en) Computer system

Legal Events

Date Code Title Description
GR01 Patent grant
CP03 Change of name, title or address
Address after: 710075 4th floor, Block A, No. 38 Gaoxin 6th Road, Zhangba Street Office, Gaoxin District, Xi'an City, Shaanxi Province
Patentee after: Xi'an Ziguang Guoxin Semiconductor Co., Ltd.
Country or region after: China
Address before: 710075 4th floor, Block A, No. 38 Gaoxin 6th Road, Zhangba Street Office, Gaoxin District, Xi'an City, Shaanxi Province
Patentee before: XI'AN UNIIC SEMICONDUCTORS Co., Ltd.
Country or region before: China