US10963780B2 - Yield improvements for three-dimensionally stacked neural network accelerators - Google Patents

Yield improvements for three-dimensionally stacked neural network accelerators

Info

Publication number
US10963780B2
Authority
US
United States
Prior art keywords
tile
faulty
tiles
neural network
data
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US15/685,672
Other versions
US20190065937A1
Inventor
Andreas Georg Nowatzyk
Olivier Temam
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Application filed by Google LLC
Priority to US15/685,672 (US10963780B2)
Assigned to GOOGLE INC. Assignors: NOWATZYK, ANDREAS GEORG; TEMAM, OLIVIER
Assigned to GOOGLE LLC (change of name from GOOGLE INC.)
Priority to PCT/US2018/047468 (WO2019040587A1)
Priority to CN202311177479.1A (CN117408324A)
Priority to CN201880038452.5A (CN110730955B)
Priority to EP18766435.4A (EP3635556A1)
Priority to TW107129464A (TWI698809B)
Publication of US20190065937A1
Priority to US17/213,871 (US11836598B2)
Publication of US10963780B2
Application granted
Priority to US18/527,902 (US20240220773A1)
Legal status: Active
Expiration: adjusted

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R 31/28 - Testing of electronic circuits, e.g. by signal tracer
    • G01R 31/317 - Testing of digital circuits
    • G01R 31/31723 - Hardware for routing the test signal within the device under test to the circuits to be tested, e.g. multiplexer for multiple core testing, accessing internal nodes
    • G06N 3/0454
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06N 3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14 - Error detection or correction of the data by redundancy in operation
    • G06F 11/1402 - Saving, restoring, recovering or retrying
    • G06F 11/1415 - Saving, restoring, recovering or retrying at system level
    • G06F 11/142 - Reconfiguring to eliminate the error
    • G06F 11/1423 - Reconfiguring to eliminate the error by reconfiguration of paths
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R 31/28 - Testing of electronic circuits, e.g. by signal tracer
    • G01R 31/317 - Testing of digital circuits
    • G01R 31/31718 - Logistic aspects, e.g. binning, selection, sorting of devices under test, tester/handler interaction networks, test management software, e.g. software for test statistics or test evaluation, yield analysis
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01R - MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R 31/00 - Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R 31/28 - Testing of electronic circuits, e.g. by signal tracer
    • G01R 31/317 - Testing of digital circuits
    • G01R 31/31722 - Addressing or selecting of test units, e.g. transmission protocols for selecting test units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 - Error detection; Error correction; Monitoring
    • G06F 11/07 - Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/16 - Error detection or correction of the data by redundancy in hardware
    • G06F 11/20 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
    • G06F 11/202 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
    • G06F 11/2051 - Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant in regular structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • H - ELECTRICITY
    • H02 - GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
    • H02J - CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
    • H02J 50/00 - Circuit arrangements or systems for wireless supply or distribution of electric power
    • H02J 50/10 - Circuit arrangements or systems for wireless supply or distribution of electric power using inductive coupling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/42 - Loop networks
    • H04L 2012/421 - Interconnected ring systems
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/02 - Topology update or discovery
    • H04L 45/06 - Deflection routing, e.g. hot-potato routing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/28 - Routing or path finding of packets in data switching networks using route fault recovery

Definitions

  • In some implementations, the inductive coils 206 a, b included in the tile 104 a are a receiver coil and a transmitter coil.
  • The receiver and transmitter coils can each include a plurality of inductive coils coupled together to transmit and receive data from vertically adjacent tiles.
  • The plurality of inductive coils are coupled together to achieve the desired bandwidth and magnetic fields to communicate data between vertically adjacent tiles.
  • Either of the inductive coils 206 a, b can be selected to be the receiver coil or the transmitter coil according to the determined dataflow configuration.
  • The receiver and transmitter coils respectively and independently receive or transmit data between tiles 104 a-p on different dies.
  • The coils each produce a magnetic field, and using the magnetic field the tiles communicate using near field communication.
  • The magnetic field belonging to a transmitter coil on a given tile 104 a-p is coupled to the magnetic field belonging to a receiver coil of a different tile 104 a-p.
  • The two coils transfer data by using the magnetic field created by the inductive coupling as a carrier signal.
  • The inductive coils 206 a, b can each be selectively chosen as the receiver coil or the transmitter coil.
  • The inductive coils 206 a, b are determined to be a receiver coil or a transmitter coil based on the configuration of the inputs and the outputs of the switch 204.
  • The inductive coil that receives an output of the switch is the transmitter coil, as it will transmit the received data to a vertically adjacent tile.
  • The coil that transmits data to an input of the switch is the receiver coil, because it passes along the data it receives from a vertically adjacent coil. Modifying the variable inputs and outputs that are defined by the configuration of the switch enables the static interconnect configuration to be changed to determine various dataflow configurations.
  • Each tile can also communicate with vertically adjacent tiles using through-silicon vias (TSVs).
  • A TSV is a vertical electrical connection that passes through the die.
  • Outputs of processing elements can be passed to the input of a switch 204 belonging to a vertically adjacent die using TSVs.
  • Each tile includes a switch 204 that is coupled to a plurality of inputs and includes a plurality of outputs.
  • In the illustrated example, the switch has four inputs (i.e., inputs A, B, C, and D) and four outputs (i.e., outputs W, X, Y, and Z).
  • The switch 204 can direct any of the plurality of inputs received at the switch to any of the plurality of switch outputs.
  • Input A can be a processing element bypass of an adjacent tile.
  • Input B can be the output of the processing element 202 of an adjacent tile.
  • Either inductive coil can be selected as the receiver coil or the transmitter coil.
  • For example, when inductive coil A 206 a is the receiver coil, input C can be data received at inductive coil A 206 a and transmitted to the switch 204.
  • Similarly, input D can be data received at inductive coil B 206 b and transmitted to the switch 204.
  • The switch 204 can transmit data from any of the inputs, inputs A, B, C, or D, to any of the outputs, outputs W, X, Y, and Z.
  • Output W can direct data to a processing element bypass.
  • The processing element bypass provides a data transmission path that bypasses the processing element 202.
  • Output W enables data to be transmitted out of the processing tile without transmitting the data to the processing element 202.
  • For example, the processing element 202 could be faulty. In that case, the processing element 202 is bypassed, using the processing element bypass, to ensure continuity of the ring bus.
  • Output X of the switch 204 is coupled to the input of the processing element 202.
  • Outputs Y and Z are each coupled to the inputs of inductive coils A and B 206 a, b. Outputs Y and Z can be selectively chosen to direct data to the inductive coils of vertically adjacent tiles.
  • Tiles 104 a-p communicating with tiles 104 a-p on different dies 102 a-e use the inductive coupling of the inductive coils to transmit data between tiles on the different dies 102 a-e.
  • For example, when the first tile 104 a on the top die 102 a communicates with the first tile 104 a on the die 102 b below the top die 102 a, the switch of the first tile 104 a on the top die directs its data to output Y.
  • Output Y transmits data to the transmitting inductive coil, which directs the data to the receiving inductive coil of the first tile 104 a on the die 102 b below.
  • The receiving inductive coil directs the data to the switch 204 of the first tile 104 a on the die 102 b below.
  • From there, the switch 204 can direct the data to any of the available switch outputs.
  • Alternatively, tiles 104 a-p communicating with tiles 104 a-p on different dies 102 a-e can use through-silicon via technology to transmit the data.
  • The switch 204 can include one or more multiplexers and one or more demultiplexers.
  • A multiplexer includes two or more selectable inputs and one output.
  • A demultiplexer includes one input and two or more selectable outputs. Accordingly, and in this instance, the switch uses the multiplexer to receive any of the four inputs, and the output of the multiplexer is coupled to the input of the demultiplexer.
  • The outputs of the demultiplexer are the four outputs of the switch, outputs W, X, Y, and Z.
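To make the routing concrete, the following is a minimal Python sketch of a switch that maps the four inputs (A, B, C, D) to the four outputs (W, X, Y, Z). The class and the port semantics are illustrative assumptions, not the patent's implementation; in hardware this role is played by the multiplexers and demultiplexers described above.

```python
# Illustrative model of the tile switch 204: any input can be routed to any
# output. Port meanings follow FIG. 2: A = bypass from an adjacent tile,
# B = an adjacent tile's processing element output, C/D = data arriving on
# inductive coils A/B; W = processing element bypass, X = processing element
# input, Y/Z = data sent toward inductive coils A/B.
class TileSwitch:
    INPUTS = ("A", "B", "C", "D")
    OUTPUTS = ("W", "X", "Y", "Z")

    def __init__(self):
        self.routes = {}  # maps an output port to the input port driving it

    def connect(self, inp, out):
        if inp not in self.INPUTS or out not in self.OUTPUTS:
            raise ValueError("unknown port")
        self.routes[out] = inp

    def drive(self, inputs):
        """inputs: dict of input port -> data; returns output port -> data."""
        return {out: inputs[inp] for out, inp in self.routes.items() if inp in inputs}

switch = TileSwitch()
switch.connect("C", "X")  # data received on coil A feeds the processing element
switch.connect("B", "Y")  # a neighbor's output is relayed down through coil A
print(switch.drive({"C": "activations", "B": "partial_sums"}))
# {'X': 'activations', 'Y': 'partial_sums'}
```

Under this model, a coil fed by output Y or Z acts as a transmitter, while a coil driving input C or D acts as a receiver, matching the selectable coil roles described above.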
  • FIG. 3 is an example block diagram of a bipartite graph.
  • The bipartite graph illustrates a dataflow configuration and the components of a neural network architecture.
  • The edges that connect the input vertices (I1-I5) to the output vertices (O1-O5) represent computing tiles.
  • That is, a computing tile is represented by a particular input vertex and a particular output vertex connected together by an edge.
  • A dataflow configuration between vertices is illustrated with solid edges. For example, an edge that goes from an output vertex to an input vertex illustrates the transmission of data from the output of one tile to the input of another tile.
  • A redundant dataflow configuration is illustrated with dashed edges. The dashed edges represent alternative dataflow paths between the vertices in the instance a tile is deemed faulty and is bypassed.
  • If a tile is faulty, the corresponding edge between an input vertex and a corresponding output vertex is removed from the graph. This illustrates that no data is transmitted from the input of the computing tile to the output of the computing tile and that the processing element 202 does not execute any computations. If the switch 204 of the computing tile is still functional, the vertices remain in the network graph because the processing element 202 of the tile can still be bypassed using the processing element bypass. Edges from the output vertices to the input vertices represent the possible connections that the switches and the vertical communicative coupling can realize. There are multiple allowable edges per vertex, representing the possible configurations in which the switches can be set to direct inputs to outputs.
  • To determine a dataflow configuration, a Hamiltonian circuit is applied to the graph.
  • The Hamiltonian circuit can illustrate a ring bus that is a closed tour of data propagations such that each active vertex receives and transmits data exactly once.
  • The Hamiltonian circuit is the maximum-length circuit that can be achieved by incorporating each functional vertex.
  • The three-dimensionally stacked neural network accelerator offers more alternate paths for dataflow configurations than a two-dimensional neural network accelerator. Therefore, the probability that an optimal or near-optimal configuration (e.g., a Hamiltonian circuit) can be found is higher for the three-dimensionally stacked neural network accelerator than for a two-dimensional accelerator.
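As an illustration of the graph formulation, the sketch below searches for a Hamiltonian circuit by backtracking. The vertex names and the adjacency structure are hypothetical; a production system would operate on the actual connectivity realizable by the switches and vertical couplings.

```python
# A minimal sketch of finding a closed tour that visits every functional
# tile exactly once. `edges` maps each vertex to the vertices its switch
# and vertical couplings can reach; the example graph is illustrative.
def hamiltonian_circuit(vertices, edges):
    """Return a closed tour visiting every vertex exactly once, or None."""
    start = vertices[0]

    def extend(path, remaining):
        if not remaining:
            return path if start in edges[path[-1]] else None  # close the ring
        for nxt in edges[path[-1]]:
            if nxt in remaining:
                found = extend(path + [nxt], remaining - {nxt})
                if found:
                    return found
        return None

    return extend([start], set(vertices) - {start})

# Four functional tiles wired in a ring, with one extra chord.
edges = {"a": {"b", "c"}, "b": {"c", "a"}, "c": {"d", "a"}, "d": {"a"}}
print(hamiltonian_circuit(["a", "b", "c", "d"], edges))  # ['a', 'b', 'c', 'd']
```

Backtracking is exponential in the worst case, but the extra vertical edges of a three-dimensional stack make a valid circuit easier to find, consistent with the yield argument above.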
  • FIG. 4 illustrates an example neural network die 400 and a dataflow configuration for the example neural network die's tiles 104 a-p.
  • The tiles 104 a-p can be arranged in any arrangement on the die 400.
  • In this example, the tiles 104 a-p are organized in a rectangular arrangement such that tiles located on vertically adjacent dies are configured in the same position.
  • For example, the first tile 104 a on the first die 102 a is located above the first tile 104 a on the second die 102 b, which is located above the first tile 104 a on the third die 102 c, etc.
  • The inputs and outputs of vertically adjacent tiles have a mirrored or rotational symmetry.
  • For example, the inputs and outputs of the first tile 104 a on the first die 102 a are positionally located on the die in the same orientation as the inputs and outputs of the first tile 104 a on the second die 102 b located in the stack above or below the first die 102 a.
  • Each tile 104 a-p is communicatively coupled with the tile's neighboring tiles on the die 400.
  • For example, the first tile 104 a is communicatively coupled with the second tile 104 b.
  • Tiles 104 a-p can be communicatively coupled in any configuration.
  • Tiles 104 a-p on neural network die 102 a can be connected together using wired connections. The wired connections enable transmission of data between each connected tile 104 a-p.
  • Each tile communicates with one or more adjacent tiles 104 a-p to create a Hamiltonian circuit representation using the tiles 104 a-p.
  • The circuit includes a communication scheme such that there is an uninterrupted flow of tile inputs connected to tile outputs, from the beginning of the ring bus to the end of the ring bus.
  • The tiles 104 a-p are configured such that the input and output of each functional tile within the ring network is connected to another functional tile or an external source according to a dataflow configuration.
  • The dataflow configuration describes a path of computational data propagation through the tiles 104 a-p within a three-dimensional neural network architecture.
  • For example, a dataflow configuration 402 may specify that a first tile 104 a, on a die 102 a, receives input data from an external source.
  • The external source can be a tile 104 a-b on a different neural network accelerator die 102 b-e, or some other source that transmits data.
  • The first tile 104 a executes computations using the data and transmits the data to a second tile 104 b.
  • The second tile 104 b computes the data and transmits the data to a third tile 104 c.
  • The process continues along the first row of tiles until the data reaches a fourth tile 104 d.
  • The fourth tile 104 d transmits the data to a fifth tile 104 h.
  • The process continues along the second row of tiles until the data reaches tile 104 e.
  • From tile 104 e, the data is transmitted to the ninth tile 104 i.
  • The data is propagated across the third row of tiles to the twelfth tile 104 l.
  • The twelfth tile 104 l transmits the data to the thirteenth tile 104 p.
  • The dataflow configuration continues to transmit the data until it reaches the sixteenth tile 104 m, where the sixteenth tile 104 m transmits the data to an external source or back to the first tile 104 a.
  • Thus, the dataflow configuration for the tiles 104 a-p on the first die 102 a is 104 a-b-c-d-h-g-f-e-i-j-k-l-p-o-n-m.
  • In other implementations, the dataflow configuration can be a different path of data travel through the set of tiles.
  • The dataflow configuration is specified based on which switch input is connected to which tile's output. Because there are a plurality of tiles 104 a-p, each tile with a respective output, and because each switch's inputs can be varied to receive different tiles' outputs, many different dataflow configurations can be achieved.
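For instance, the serpentine configuration above can be generated programmatically. This sketch labels a 4x4 grid of tiles a through p row by row and alternates the traversal direction per row; the grid-labeling scheme is an assumption made for illustration.

```python
# Build the serpentine dataflow order for a rows x cols grid of tiles
# labeled a, b, c, ... row by row: even rows left-to-right, odd rows
# right-to-left, reproducing the configuration described above.
def serpentine(rows, cols):
    labels = [[chr(ord("a") + r * cols + c) for c in range(cols)] for r in range(rows)]
    order = []
    for r, row in enumerate(labels):
        order.extend(row if r % 2 == 0 else reversed(row))
    return order

print("-".join(serpentine(4, 4)))
# a-b-c-d-h-g-f-e-i-j-k-l-p-o-n-m
```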
  • Some tiles 104 a-p may be faulty after production of the three-dimensionally stacked neural network accelerator 100.
  • Post-production tests are executed after stacking the dies to identify which tiles are faulty.
  • The identified faulty tiles are bypassed, and a dataflow configuration is established that eliminates the use of the faulty tiles.
  • The tile configuration includes a redundant data path 304 that can be implemented to bypass faulty tiles 104 a-p.
  • Eliminating a faulty tile (or tiles) can include directing computational data to every other tile in the three-dimensionally stacked neural network accelerator except the identified faulty tiles.
  • Each other tile will perform its designated computational functions as part of performing the computation of the neural network.
  • The other tiles can collectively perform the computations of the identified faulty tiles, or one tile can be dedicated to perform the computations of the identified faulty tiles.
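The following sketch illustrates the two reassignment strategies just mentioned, under an assumed layer-to-tile assignment scheme that the patent does not specify.

```python
# Redistribute the work of faulty tiles: either spread their neural network
# layers across all remaining tiles (default) or dedicate one tile to them.
# The mapping of layers to tiles is a hypothetical representation.
def reassign(assignments, faulty, dedicate_to=None):
    """assignments: dict tile -> list of layer indices it computes."""
    orphaned = [layer for t in faulty for layer in assignments.pop(t, [])]
    healthy = sorted(assignments)
    for i, layer in enumerate(orphaned):
        target = dedicate_to if dedicate_to else healthy[i % len(healthy)]
        assignments[target].append(layer)
    return assignments

work = {"104a": [0], "104b": [1], "104c": [2], "104d": [3]}
print(reassign(work, faulty={"104c"}))
# {'104a': [0, 2], '104b': [1], '104d': [3]}
```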
  • FIG. 5 illustrates an example dataflow configuration 500 for a three-dimensionally stacked neural network accelerator.
  • In this example, the neural network accelerator includes two neural network dies, a top die 102 a and a bottom die 102 b, arranged using a pair of inductively coupled connections to aggregate two rings together to form one ring of 16 tiles distributed over two vertically stacked chips.
  • In some implementations, the three-dimensionally stacked neural network accelerator includes more than two dies stacked together (e.g., 3-10 dies).
  • Each die includes a plurality of tiles 104 a-h.
  • The tiles 104 a-h on both dies process data to perform neural network computations according to the dataflow configuration of data propagation through the tiles 104 a-h.
  • In this example, tile 104 f on the top die 102 a is a total tile failure.
  • A total tile failure occurs when the bypass or the switch fails and the processing element also fails.
  • Absent the vertical connections between dies, the entire die would be a total failure because the dataflow configuration could not create a continuous dataflow path around the die.
  • Because tile 104 f on the top die 102 a is a total tile failure, neighboring tiles cannot output data to tile 104 f, as there is no way for tile 104 f to output data to another tile 104 a-h.
  • Accordingly, no tiles output data to the input of tile 104 f, and tile 104 f is completely bypassed by transmitting data to tiles adjacent and vertically adjacent to tile 104 f.
  • Tile 104 g on the top die 102 a and tile 104 c on the bottom die 102 b are partial failures.
  • A partial failure occurs where a processing element 202 fails but the switch and data path are still functional.
  • A tile that is experiencing a partial failure can still receive input data because the tile's switch 204 is still functional. Therefore, the partially failed tile's switch 204 can output data to different tiles 104 a-h using the transmitting inductive coils, or output data using the processing element bypass 208.
  • In this example, tile 104 g on the top die 102 a and tile 104 c on the bottom die 102 b utilize the transmitting inductive coils to output data to the vertically adjacent tiles.
  • Alternatively, tile 104 g on the top die 102 a and tile 104 c on the bottom die 102 b could both, or each independently, utilize the processing element bypass 208 to output data to adjacent tiles 104 a-h.
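The distinction between the two failure modes can be summarized as a small decision rule; the predicate names here are assumptions for illustration, and the mapping of failure combinations to actions is a conservative reading of the description above.

```python
# Hypothetical classification of a tile per the failure modes above.
def routing_action(pe_ok, switch_ok, bypass_ok):
    if pe_ok and switch_ok:
        return "functional: process data normally"
    if switch_ok and bypass_ok:
        # Partial failure: the switch and data path still work, so data can
        # pass through (via the bypass or a vertical coil) without processing.
        return "partial failure: pass data through, skipping the processing element"
    # Total tile failure: data cannot traverse the tile at all.
    return "total failure: route around the tile via adjacent and vertically adjacent tiles"

print(routing_action(pe_ok=False, switch_ok=True, bypass_ok=True))
print(routing_action(pe_ok=False, switch_ok=False, bypass_ok=False))
```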
  • FIG. 6 is an example flow chart of a process 600, executed by a system of one or more computers, for modifying a dataflow configuration for tiles within a three-dimensionally stacked neural network accelerator. Aspects of FIG. 6 will be discussed in connection with FIG. 5.
  • The process can be performed by a host computer to which the accelerator is coupled.
  • The three-dimensionally stacked neural network accelerator's tiles are tested to determine the functionality of each of the plurality of tiles on the neural network wafers and to identify faulty tiles.
  • Techniques for die testing include using out-of-band control channels, for example, a Joint Test Action Group (JTAG) scan chain, to test each of the tiles within the three-dimensionally stacked neural network accelerator. The testing identifies which tiles are faulty and are to be bypassed to create the Hamiltonian circuit.
  • The system is configured to determine the Hamiltonian circuit based on the arrangement of functional tiles 104 a-h within the three-dimensionally stacked neural network accelerator.
  • The system can then implement the dataflow configuration using the remaining functional tiles 104 a-h of the three-dimensionally stacked neural network accelerator according to the determined Hamiltonian circuit.
  • The testing can occur prior to or after stacking the neural network wafers 102 a-b.
  • For example, testing can occur prior to cutting the larger fabricated wafers into the smaller dies designed for the three-dimensionally stacked neural network accelerators. In this instance, each tile on the larger fabricated wafer is tested for functionality.
  • Alternatively, the dies are stacked together to create the three-dimensionally stacked neural network accelerator, and each tile is then tested for functionality. In either instance, the tiles 104 a-p are tested prior to executing computations on the three-dimensionally stacked neural network accelerator.
  • In some implementations, three-dimensionally stacked neural network accelerators are continually analyzed during operation to identify tiles that may have been operational during the initial functional testing but have since failed or become faulty.
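A runtime monitor of this kind might look like the following sketch; the per-tile self-test hook and the reconfiguration callback are assumed interfaces, not part of the patent.

```python
# Illustrative runtime monitor: when a tile that passed initial testing
# later fails its self-test, trigger a dataflow reconfiguration.
def monitor(tiles, self_test, reconfigure):
    faulty = {t for t in tiles if not self_test(t)}
    if faulty:
        reconfigure(faulty)  # e.g., rebuild the Hamiltonian circuit without them
    return faulty

print(monitor(["104a", "104b"],
              self_test=lambda t: t != "104b",
              reconfigure=lambda f: print("bypassing", sorted(f))))
```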
  • The process includes determining that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile (402).
  • A faulty tile is a tile that, based on the analysis, does not function as designed.
  • As described above, the three-dimensionally stacked neural network accelerator comprises a plurality of neural network accelerator dies 102 a-e stacked on top of each other, and each neural network accelerator die includes a respective plurality of tiles 104 a-p.
  • Each tile has a plurality of input and output connections that transmit data into and out of the tile. The data is used to execute neural network computations.
  • In the example of FIG. 5, the sixth and seventh tiles 104 f and 104 g on the top die 102 a and the third tile 104 c on the bottom die 102 b have been determined to be faulty.
  • Faulty tiles 104 f and 104 g on the top die 102 a and 104 c on the bottom die 102 b are bypassed and removed from the dataflow configuration.
  • Removing the faulty tiles includes modifying the dataflow configuration to transmit an output of a tile before the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile (404).
  • In some implementations, bypassing the faulty tile includes removing power that is provided to the faulty tile 104 c.
  • Power can be removed from the faulty tile 104 c by disconnecting a switch that provides power to the faulty tile or by using programming logic to remove power provided to the tile. Removing power provided to the faulty tile 104 c ensures that the faulty tile is not operational and that the faulty tile 104 c does not draw unnecessary power from a power source providing power to the three-dimensionally stacked neural network accelerator. Further, data is transmitted either around the faulty tile or through the faulty tile, but not to the processing element 202 of the faulty tile.
  • Alternatively, bypassing the faulty tile 104 c can include turning off a clock that is unique to the faulty tile.
  • The faulty tile's clock can be disabled using programming logic or by physically removing the clock from the circuit by disconnecting the clock's output. Turning off the faulty tile's clock stops the tile from executing processing functions, thereby deactivating the faulty tile and ceasing the faulty tile's operation.
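Both deactivation mechanisms amount to flipping per-tile control state, as in this deliberately simplified sketch; the control fields are hypothetical, since the patent specifies only that power or the tile's clock can be cut off.

```python
# Minimal model of deactivating a faulty tile by power gating or clock gating.
class TileController:
    def __init__(self):
        self.power_on = True
        self.clock_enabled = True

    def bypass_faulty(self, cut_power=True):
        if cut_power:
            self.power_on = False       # open the switch feeding the tile
        else:
            self.clock_enabled = False  # gate the tile's unique clock

faulty = TileController()
faulty.bypass_faulty(cut_power=True)
print(faulty.power_on, faulty.clock_enabled)  # False True
```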
  • The faulty tiles are bypassed and the dataflow configuration is modified according to a Hamiltonian circuit representation of the tile dataflow configuration.
  • For example, one implementation can include having the first tile 104 a on the top die 102 a receive input data from an external source (1). The first tile 104 a processes the data, and the first tile 104 a transmits output data to the switch of the second tile 104 b.
  • The switch 204 of the second tile 104 b of the top die 102 a directs the data to the transmitting inductive coil (T) and transmits the data to the second tile 104 b of the bottom die 102 b.
  • Note that the inductive coils have been presented as single coils; however, as previously mentioned, each of the inductive coils can be a plurality of coils communicatively coupled to transmit and receive data from other coils on vertically adjacent tiles.
  • The second tile 104 b of the bottom die 102 b processes the data (2) and transmits the data to the third tile 104 c of the bottom die 102 b. Since the third tile 104 c of the bottom die 102 b was determined to be faulty, its processing element is bypassed and the data is sent to the transmitting inductive coil (T), which transmits the data using inductive coupling to the receiving inductive coil (R) of the third tile 104 c of the top die 102 a. That tile's processing element processes the data (3) and transmits the data to the switch 204 of the fourth tile 104 d of the top die 102 a.
  • The switch transmits the data to the transmitting inductive coil (T) of the fourth tile 104 d of the top die 102 a, which transmits the data to the fourth tile 104 d of the bottom die 102 b.
  • The fourth tile 104 d of the bottom die 102 b processes the data (4) and the output is transmitted to the eighth tile 104 h of the bottom die 102 b.
  • The processing element of the eighth tile 104 h processes the data (5) and the output is transmitted to the switch of the seventh tile 104 g on the bottom die 102 b.
  • The seventh tile 104 g processes the data (6) and the output is transmitted to the switch of the sixth tile 104 f of the bottom die 102 b.
  • The sixth tile 104 f processes the data (7) and the output is transmitted to the switch of the fifth tile 104 e on the bottom die 102 b.
  • The fifth tile's processing element processes the data (8) and the output is transmitted to the first tile 104 a of the bottom die 102 b.
  • The first tile 104 a processes the data (9) and the output is transmitted to the switch of the second tile 104 b of the bottom die 102 b.
  • The second tile's switch transmits the data to the second tile's transmitting inductive coil (T), which transmits the data to the receiving inductive coil (R) of the second tile 104 b of the top die 102 a.
  • The second tile's processing element processes the data (10) and the output is transmitted to the switch of the third tile 104 c of the top die 102 a.
  • Because the third tile's processing element has already processed the data (3), the data is transmitted using the processing element bypass to bypass the third tile's processing element, and the data is transmitted to the switch of the fourth tile 104 d of the top die 102 a.
  • The fourth tile 104 d processes the data (11) and transmits the data to the eighth tile 104 h of the top die 102 a.
  • The eighth tile 104 h processes the data (12) and the output is transmitted to the switch of the seventh tile 104 g of the top die 102 a.
  • The seventh tile 104 g of the top die was determined to be faulty during testing. Therefore, the seventh tile's switch transmits the data to the transmitting inductive coil (T) of the seventh tile 104 g.
  • The transmitting inductive coil (T) directs the data to the receiving inductive coil (R) of the seventh tile 104 g of the bottom die 102 b.
  • The seventh tile's switch directs the data to the sixth tile 104 f of the bottom die 102 b using the processing element bypass.
  • The switch of the sixth tile 104 f transmits the data to the switch of the fifth tile 104 e of the bottom die 102 b using the sixth tile's processing element bypass.
  • The fifth tile 104 e directs the data to the transmitting inductive coil (T) of the fifth tile 104 e, which transmits the data to the receiving inductive coil (R) of the fifth tile 104 e of the top die 102 a.
  • The fifth tile 104 e processes the data (13) and the output is either directed back to the first tile 104 a of the top die 102 a or directed to an external device.
  • The sixth tile 104 f of the top die 102 a was identified as faulty. In this example, no data was routed to the sixth tile 104 f and the sixth tile 104 f was completely bypassed.
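As a consistency check on the walkthrough above, the sketch below encodes the processing order (1) through (13) and verifies that every functional tile's processing element is used exactly once; the tuple encoding of tiles is illustrative.

```python
# FIG. 5 scenario: 2 dies of 8 tiles, faulty tiles {top f, top g, bottom c}.
# The modified ring should let each of the 13 functional tiles process once.
order = [  # (die, tile) pairs where processing occurs, per the walkthrough
    ("top", "a"), ("bottom", "b"), ("top", "c"), ("bottom", "d"),
    ("bottom", "h"), ("bottom", "g"), ("bottom", "f"), ("bottom", "e"),
    ("bottom", "a"), ("top", "b"), ("top", "d"), ("top", "h"), ("top", "e"),
]
faulty = {("top", "f"), ("top", "g"), ("bottom", "c")}
all_tiles = {(d, t) for d in ("top", "bottom") for t in "abcdefgh"}

assert set(order) == all_tiles - faulty      # every functional tile is used
assert len(order) == len(set(order)) == 13   # each processes exactly once
print("ring covers all functional tiles exactly once")
```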
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus.
  • Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating outputs.
  • The processes and logic flows can also be performed by, and apparatus can also be implemented as, special-purpose logic circuitry, e.g., an FPGA (field-programmable gate array), an ASIC (application-specific integrated circuit), or a GPGPU (general-purpose graphics processing unit).
  • Computers suitable for the execution of a computer program include, by way of example, those based on general or special purpose microprocessors or both, or any other kind of central processing unit.
  • Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both.
  • The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.
  • Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks.
  • However, a computer need not have such devices.


Abstract

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for three-dimensionally stacked neural network accelerators. In one aspect, a method includes obtaining data specifying that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile. The three-dimensionally stacked neural network accelerator includes a plurality of neural network dies, each neural network die including a respective plurality of tiles, each tile having input and output connections. The three-dimensionally stacked neural network accelerator is configured to process inputs by routing the input through each of the plurality of tiles according to a dataflow configuration. The dataflow configuration is modified to route an output of a tile before the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile.

Description

BACKGROUND
This specification generally relates to three-dimensionally stacked neural network accelerators.
Neural networks are machine learning models that employ one or more layers of nonlinear units to predict an output for a received input. Some neural networks include one or more hidden layers in addition to an output layer. The output of each hidden layer is used as input to the next layer in the network, i.e., the next hidden layer or the output layer. Each layer of the network generates an output from a received input in accordance with current values of a respective set of parameters.
Typically, neural network processing systems use general-purpose graphics processing units, field-programmable gate arrays, application-specific integrated circuits, and other similar hardware to implement the neural network.
SUMMARY
In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include the actions of obtaining data specifying that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile. The three-dimensionally stacked neural network accelerator includes a plurality of neural network dies stacked on top of each other, each neural network die including a respective plurality of tiles, each tile having input and output connections that route data into and out of the tile. The three-dimensionally stacked neural network accelerator is configured to process inputs by routing the input through each of the plurality of tiles according to a dataflow configuration. The actions further include modifying the dataflow configuration to route an output of a tile before the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile.
Other embodiments of this aspect include corresponding systems, apparatus, and computer programs, configured to perform the actions of the methods, encoded on computer storage devices.
A neural network accelerator can be used to accelerate the computation of a neural network, i.e., the processing of an input using the neural network to generate an output or the training of the neural network to adjust the values of the parameters of the neural network. Three-dimensionally stacked neural network accelerators can be constructed with vertical interconnects that communicatively couple vertically adjacent dies. Three-dimensionally stacked neural network accelerators are cheaper to fabricate and more compact than traditional neural network accelerators. However, traditional mechanisms for fabricating three-dimensionally stacked neural network accelerators make it unlikely that a given three-dimensionally stacked neural network accelerator is fabricated with only functional dies, i.e., is fabricated without one or more dies being faulty.
Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. For three-dimensionally stacked neural network accelerators, the more computing tiles that are stacked on top of each other, the higher the probability that the whole stack is faulty. This occurs because if one computing tile is faulty, this may render the entire three-dimensionally stacked neural network accelerator inoperable, resulting in potentially poor yield of operable three-dimensionally stacked neural network accelerators. However, modifying dataflow configurations for three-dimensionally stacked neural network accelerators increases functionality for the three-dimensionally stacked accelerators. For example, modifying the dataflow configuration allows the three-dimensionally stacked neural network accelerator to still be usable even if one or more computing tiles are faulty.
Attempting to use the faulty tiles would render the entire three-dimensionally stacked neural network accelerator useless. Therefore, the faulty tiles are bypassed to ensure functionality of the remaining portions of the three-dimensionally stacked neural network accelerator. Modifying the dataflow configuration for a three-dimensionally stacked neural network accelerator includes altering outputs of given computing tiles to inputs of computing tiles on dies above or below the given computing tiles, enabling a more modular flow of data throughout the three-dimensionally stacked neural network accelerator. In addition, modifying the dataflow configuration improves the yield of operable three-dimensionally stacked neural network accelerators because having one or more faulty computing tiles will not render the entire accelerator inoperable. Three-dimensionally stacked neural network accelerator yields decrease as the total chip area increases; modifying the dataflow configuration to include transmitting data between vertically adjacent dies therefore increases yields for three-dimensionally stacked neural network accelerators.
The details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other potential features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A-B are block diagrams of an example three-dimensionally stacked neural network accelerator.
FIG. 2 is a block diagram of a computing tile.
FIG. 3 is an example block diagram of a bipartite graph.
FIG. 4 illustrates an example neural network dataflow configuration.
FIG. 5 illustrates an example dataflow configuration for a three-dimensionally stacked neural network accelerator.
FIG. 6 is a flowchart of an example process for modifying a dataflow configuration for tiles within a three-dimensionally stacked neural network accelerator.
Like reference numbers and designations in the various drawings indicate like elements.
DETAILED DESCRIPTION
The subject matter described in this specification relates to a hardware computing system including multiple computing units configured to accelerate workloads of a neural network. Each computing unit of the hardware computing system is self-contained and can independently execute computations required by a portion, e.g., a given layer, of a multi-layer neural network.
A neural network accelerator can be used to accelerate the computation of a neural network, i.e., the processing of an input using the neural network to generate an output or the training of the neural network to adjust the values of the parameters of the neural network. The neural network accelerator has data inputs and outputs. The neural network accelerator receives data, processes the data, and outputs the processed data. A three-dimensionally stacked neural network accelerator uses a plurality of neural network dies stacked on top of each other to increase computing power for a neural network accelerator. Each neural network accelerator die includes a plurality of computing tiles. Each computing tile has an input and an output, and processes data using a computing tile processor.
Tiles are connected together in sequence and the neural network accelerator directs data between each of the tiles according to a dataflow configuration. For example, data is received at a first computing tile, a computation is executed, and the first tile's output is transmitted to the input of a second computing tile, which also completes a computation. In some instances, a computing tile may be faulty (i.e., not functioning as intended) after the accelerator has been manufactured. For example, the tile may have non-functioning on-die cache memory, damaged intra-die connections, an incorrect clock, and so on, which may render the entire neural network accelerator inoperable.
However, according to the systems and methods described herein, a faulty computing tile is bypassed during computation by the neural network accelerator, i.e., no output is transmitted to the faulty computing tile's input. Instead of routing an output of one tile to the input of the faulty tile, the output is routed to an input of a different computing tile that is on a die above or below the die that houses the faulty tile. After the different computing tile executes its computation, the different computing tile sends its output to an input of another computing tile, e.g., a computing tile that is housed on same neural network die that houses the faulty computing tile. This bypasses the faulty tile and enables the use of the three-dimensionally stacked neural network accelerator even with one or more faulty tiles.
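As a minimal sketch of this reroute, assuming the dataflow is represented as an ordered list of tiles and that each tile knows its vertical neighbor (data structures the specification does not prescribe):

```python
# Replace a faulty tile in the dataflow order with the tile directly above
# or below it on a neighboring die, so the tile feeding the faulty tile now
# feeds the vertical neighbor instead.
def reroute(dataflow, faulty, vertical_neighbor):
    """dataflow: ordered list of tiles; returns a new order skipping `faulty`."""
    return [vertical_neighbor[t] if t == faulty else t for t in dataflow]

flow = ["a0", "b0", "c0", "d0"]   # tiles on die 0, in dataflow order
up_down = {"c0": "c1"}            # c1 sits directly below c0 on die 1
print(reroute(flow, faulty="c0", vertical_neighbor=up_down))
# ['a0', 'b0', 'c1', 'd0'] - b0's output now routes to c1's input
```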
FIGS. 1A-B are block diagrams of an example three-dimensionally stacked neural network accelerator 100. A neural network accelerator 100 is an integrated circuit that is designed to accelerate the computation of a neural network, i.e., the processing of an input using the neural network to generate an output or the training of the neural network. A three-dimensionally stacked neural network accelerator 100 includes a plurality of neural network accelerator dies 102 a-e stacked on top of each other creating a large-scale neural network accelerator.
Typically, the neural network accelerator dies 102 a-e are created using semiconductor material (e.g., silicon, gallium arsenide, indium arsenide, etc.) and are manufactured using traditional semiconductor wafer fabrication techniques. Each of the neural network accelerator dies 102 a-e includes a plurality of computing tiles, hereafter referred to as tiles 104 a-p, arranged on a surface of the die 102 a-e.
Each tile 104 a-p is an individual computing unit, and the tiles 104 a-p collectively perform processing that accelerates computations across the three-dimensionally stacked neural network accelerator. Generally, a tile 104 a-p is a self-contained computational component configured to execute all or portions of the processing for a neural network computation. Example architectures for the tiles 104 a-p are described in U.S. patent application Ser. No. 15/335,769, which is incorporated herein by reference.
The tiles on each die are connected according to a static interconnect system. For example, each tile is communicatively coupled to one or more adjacent tiles (including adjoining tiles on the same die and tiles directly above or below on other dies) using inductive coupling, through-silicon vias (TSVs), or wired connections. The configuration of a static interconnect system will be described in more detail below in connection with FIG. 3.
FIG. 2 is a block diagram of a computing tile 104 a. Each computing tile 104 a includes a processing element 202, a switch 204, inductive coils 206 a and 206 b, and a processing element bypass 208. The components illustrated in FIG. 2 are not drawn to scale. Typically, the processing element 202 consumes most of the tile area. Because the defect density of computing tiles is uniform, the large area of the processing element 202 makes it the component most likely to fail. The processing element 202 receives an input from an output of the switch 204 and executes computations for the neural network accelerator. The output of the processing element 202 can be transmitted to the input of a switch 204 of a different tile 104 a-p.
Some or all of the tiles 104 a-p include inductive coils 206 a, b. Although FIG. 2 illustrates a tile 104 a with two inductive coils 206 a, b, in some implementations the computing tile can include between 10 and 1000 coils. The inductive coils enable inductive coupling of vertically adjacent tiles 104 a-p using magnetic fields between the tiles 104 a-p. The inductive coupling of tiles 104 a-p enables tiles 104 a-p on different dies to communicate using near field wireless communication. Each tile 104 a-p communicates with adjacent tiles above or below the tile using the inductive coupling. For example, the first tile 104 a on the top die 102 a can transmit data to and receive data from the first tile 104 a on the second die 102 b located under the top die 102 a.
Typically, tiles 104 a-p communicate with the adjacent tiles directly above or below them using inductive coupling. However, in some implementations, tiles within a three-dimensionally stacked neural network accelerator can communicate with tiles on any die 102 a-e within the three-dimensionally stacked neural network accelerator 100. For example, for a three-dimensionally stacked neural network accelerator with 7 stacked dies, a particular tile positioned on a given die can vertically communicate with tiles above or below the particular tile on any of the other 6 stacked dies. One way of implementing the near-field communication technology is described in "ThruChip Interface for 3D system integration" by Tadahiro Kuroda at http://ieeexplore.ieee.org/document/5496681/.
The inductive coils 206 a, b included in the tile 104 a are a receiver coil and a transmitter coil. The receiver and transmitter coils can each include a plurality of inductive coils coupled together to transmit and receive data from vertically adjacent tiles; the plurality of inductive coils are coupled together to achieve the bandwidth and magnetic fields needed to communicate data between vertically adjacent tiles. Either of the inductive coils 206 a, b can be selected to be the receiver coil or the transmitter coil according to the determined dataflow configuration. The receiver and transmitter coils independently receive and transmit data, respectively, between tiles 104 a-p on different dies. Each coil produces a magnetic field, and the tiles use the magnetic fields to communicate using near field communication. For example, the magnetic field belonging to a transmitter coil on a given tile 104 a-p is coupled to the magnetic field belonging to a receiver coil of a different tile 104 a-p. The two coils transfer data by using the magnetic field created by the inductive coupling as a carrier signal.
The inductive coils 206 a, b can each be selectively chosen as the receiver coil or the transmitter coil. Whether an inductive coil 206 a, b acts as a receiver coil or a transmitter coil is determined by the configuration of the inputs and the outputs of the switch 204. For example, the inductive coil that receives an output of the switch is the transmitter coil, as it will transmit the received data to a vertically adjacent tile. The coil that transmits data to an input of the switch is the receiver coil, because it passes along the data it receives from a vertically adjacent coil. Modifying the variable inputs and outputs that are defined by the configuration of the switch enables the static interconnect configuration to be changed to realize various dataflow configurations.
In some implementations, each tile can also communicate with vertically adjacent tiles using through-silicon vias (TSVs). A TSV is a vertical electrical connection that passes through the die. Outputs of processing elements can be passed to the input of a switch 204 belonging to a vertically adjacent die using TSVs.
Each tile includes a switch 204 that is coupled to a plurality of inputs and includes a plurality of outputs. In some implementations, and as shown in FIG. 2, the switch has four inputs (i.e., inputs A, B, C, and D) and four outputs (i.e., outputs W, X, Y, and Z). The switch 204 can direct any of the plurality of inputs received at the switch to any of the plurality of switch outputs. In this instance, input A can be a processing element bypass of an adjacent tile, and input B can be the output of the processing element 202 of an adjacent tile. Either inductive coil can be selected as the receiver coil or the transmitter coil: whichever coil is configured as the transmitter coil sends data to vertically adjacent tiles, and whichever coil is configured as the receiver coil receives data from vertically adjacent tiles. In the instance where inductive coil A 206 a is the receiver coil, input C can be data received at inductive coil A 206 a and transmitted to the switch 204. Alternatively, and based on the selected dataflow configuration, in the instance where inductive coil B 206 b is the receiver coil, input D can be data received at inductive coil B 206 b and transmitted to the switch 204.
The switch 204 can transmit data from any of the inputs, inputs A, B, C, or D, to any of the outputs, outputs W, X, Y, and Z. In this instance, output W can direct data to a processing element bypass. The processing element bypass provides a data transmission path that bypasses the processing element 202. Output W enables data to be transmitted out of the processing tile without transmitting the data to the processing element 202, for example, when the processing element 202 is faulty and is bypassed, using the processing element bypass, to ensure continuity of the ring bus. Output X of the switch 204 is coupled to the input of the processing element 202; thus, data transmitted by output X of the switch is transmitted to the processing element 202, which uses the data for neural network accelerator computations. Outputs Y and Z are coupled to the inputs of inductive coils A and B 206 a, b, respectively. Outputs Y and Z can be selectively chosen to direct data to inductive coils of vertically adjacent tiles.
In some implementations, tiles 104 a-p communicating with tiles 104 a-p on different dies 102 a-e use the inductive coupling of the inductive coils to transmit input data between dies 102 a-e. For example, in the instance where the first tile 104 a on the top die 102 a communicates with the first tile 104 a on the die 102 b below the top die 102 a, the switch of the first tile 104 a on the top die directs data to output Y. Output Y transmits the data to the transmitting inductive coil, which directs the data to the receiving inductive coil of the first tile 104 a on the die 102 b below. The receiving inductive coil directs the data to the switch 204 of the first tile 104 a on the die 102 b below, and the switch 204 can direct the data to any of the available switch outputs. In other implementations, tiles 104 a-p communicating with tiles 104 a-p on different dies 102 a-e use through-silicon via technology to transmit the data.
In some implementations, the switch 204 can include one or more multiplexers and one or more demultiplexers. A multiplexer includes two or more selectable inputs and one output, while a demultiplexer includes one input and two or more selectable outputs. Accordingly, and in this instance, the switch uses the multiplexer to receive any of the four inputs, and the output of the multiplexer is coupled to the input of the demultiplexer. The outputs of the demultiplexer are the four outputs of the switch, outputs W, X, Y, and Z.
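The routing behavior described above can be summarized in software. The following Python sketch is an editorial illustration rather than part of the disclosure: it models the four-input/four-output switch as one multiplexer select plus one demultiplexer select, and the class and method names are invented for the example.

```python
# Behavioral model of the tile switch of FIG. 2 (illustrative only).
# Inputs: A (bypass of an adjacent tile), B (processing element output of an
# adjacent tile), C/D (receiver coils). Outputs: W (processing element
# bypass), X (processing element input), Y/Z (transmitter coils).

class TileSwitch:
    INPUTS = ("A", "B", "C", "D")
    OUTPUTS = ("W", "X", "Y", "Z")

    def __init__(self, selected_input="A", selected_output="W"):
        self.configure(selected_input, selected_output)

    def configure(self, selected_input, selected_output):
        """Set the multiplexer/demultiplexer selects for this tile's hop."""
        assert selected_input in self.INPUTS
        assert selected_output in self.OUTPUTS
        self.selected_input = selected_input
        self.selected_output = selected_output

    def route(self, data_by_input):
        """Forward the selected input's data to the selected output."""
        return {self.selected_output: data_by_input[self.selected_input]}
```

For instance, configure("C", "W") would model a partially failed tile that accepts data on a receiver coil and forwards it over the processing element bypass without touching the processing element.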
FIG. 3 is an example block diagram of a bipartite graph. The bipartite graph illustrates the dataflow configuration and components of a neural network architecture. The edges connecting the input vertices (I1-5) to the output vertices (O1-5) represent computing tiles: a computing tile is represented by a particular input vertex and a particular output vertex connected together by an edge. A dataflow configuration between vertices is illustrated with solid edges. For example, an edge that goes from an output vertex to an input vertex illustrates the transmission of data from the output of one tile to the input of another tile. A redundant dataflow configuration is illustrated with dashed edges. The dashed edges represent alternative dataflow paths between the vertices in the instance a tile is deemed faulty and is bypassed.
If a tile is faulty, the corresponding edge between an input vertex and a corresponding output vertex is removed from the graph. This illustrates that no data is transmitted from the input of the computing tile to the output of the computing tile and that the processing element 202 does not execute any computations. If the switch 204 of the computing tile is still functional, the vertices remain in the network graph because the processing element 202 of the tile can still be bypassed using the processing element bypass. Edges from the output vertices to the input vertices represent the possible connections that the switches and the vertical communicative coupling can realize. Each vertex can have multiple allowable edges, representing the possible ways the switches can be configured to direct inputs to outputs.
To bypass a faulty tile and increase three-dimensionally stacked neural network accelerator yield, a Hamiltonian circuit is applied to the graph. The Hamiltonian circuit can illustrate a ring bus that is a closed tour of data propagations such that each active vertex receives and transmits data exactly once. The Hamiltonian circuit is the maximum-length circuit that can be achieved by incorporating each functional vertex. The three-dimensionally stacked neural network accelerator offers more alternate paths for dataflow configurations than a two-dimensional neural network accelerator. Therefore, the probability that an optimal or near-optimal configuration (e.g., a Hamiltonian circuit) can be found is higher for the three-dimensionally stacked neural network accelerator than for a two-dimensional neural network accelerator.
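As an illustration of how such a circuit might be found, the Python sketch below uses plain backtracking over the graph of functional tiles; the patent does not prescribe a search algorithm, so the function name and graph encoding are assumptions made for the example, with `edges` mapping each usable tile to the tiles its switch can feed.

```python
# Backtracking search for a Hamiltonian circuit (ring bus) over functional
# tiles. `edges` maps a tile id to the tile ids reachable from its switch;
# faulty tiles are simply omitted from `edges` before the search.

def find_ring(edges, start):
    """Return a closed tour visiting every tile in `edges` once, or None."""
    path = [start]

    def extend(tile):
        if len(path) == len(edges):
            return start in edges[tile]  # can the tour close back to start?
        for nxt in edges[tile]:
            if nxt not in path:
                path.append(nxt)
                if extend(nxt):
                    return True
                path.pop()
        return False

    return path + [start] if extend(start) else None

# Tiny example: three functional tiles wired in a cycle.
assert find_ring({"a": ["b"], "b": ["c"], "c": ["a"]}, "a") == ["a", "b", "c", "a"]
```

Backtracking is exponential in the worst case, but the point made above still holds: the extra vertical edges of the three-dimensional stack give the search many more ways to succeed than a single die would.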
FIG. 4 illustrates an example neural network die 400 and a dataflow configuration for the example neural network die's tiles 104 a-p. The tiles 104 a-p are organized in a rectangular arrangement such that tiles located on vertically adjacent dies are configured in the same position, although, generally, the tiles 104 a-p can be arranged in any arrangement on the die 400. For example, the first tile 104 a on the first die 102 a is located above the first tile 104 a on the second die 102 b, which is located above the first tile 104 a on the third die 102 c, etc. In addition, inputs and outputs of vertically adjacent tiles have a mirrored or rotational symmetry. For example, the inputs and outputs of the first tile 104 a on the first die 102 a are positionally located on the die in the same orientation as the inputs and outputs for the first tile 104 a on the second die 102 b located in the stack above or below the first die 102 a.
Each tile 104 a-p is communicatively coupled with the tile's neighboring tiles on the die 400. In this example, the first tile 104 a is communicatively coupled with the second tile 104 b. However, tiles 104 a-p can be communicatively coupled in any configuration. Tiles 104 a-p on neural network die 102 a can be connected together using wired connections. The wired connections enable transmission of data between each connected tile 104 a-p.
Each tile communicates with one or more adjacent tiles 104 a-p to create a Hamiltonian circuit representation using the tiles 104 a-p. The circuit includes a communication scheme such that there is an uninterrupted flow of tile inputs connected to tile outputs, from the beginning of the ring bus to the end of the ring bus. For example, the tiles 104 a-p are configured such that the input and output of each functional tile within the ring network is connected to another functional tile or external source according to a dataflow configuration. The dataflow configuration describes a path of computational data propagation through the tiles 104 a-p within a three-dimensional neural network architecture.
For example, and referring to FIG. 4, in some implementations, a dataflow configuration 402 may specify that a first tile 104 a, on a die 102 a, receives input data from an external source. In some implementations, the external source can be a tile 104 a-b on a different neural network accelerator die 102 b-e, or some other source that transmits data. The first tile 104 a executes computations using the data and transmits the data to a second tile 104 b. Likewise, the second tile 104 b executes computations using the data and transmits the data to a third tile 104 c. In this implementation, the process continues along the first row of tiles until the data reaches a fourth tile 104 d. The fourth tile 104 d transmits the data to a fifth tile 104 h.
The process continues along the second row of tiles until the data reaches tile 104 e, which in like manner transmits the data to the ninth tile 104 i. The data is propagated across the third row of tiles to the twelfth tile 104 l. The twelfth tile 104 l transmits the data to the thirteenth tile 104 p. The dataflow configuration continues to transmit the data to the sixteenth tile 104 m, where the sixteenth tile 104 m transmits the data to an external source or back to the first tile 104 a. In this implementation, the dataflow configuration for the tiles 104 a-p on the first die 102 a is 104 a-b-c-d-h-g-f-e-i-j-k-l-p-o-n-m. In other implementations, the dataflow configuration can be a different path of data travel through the set of tiles. The dataflow configuration is specified based on which switch input is connected to which tile's output. Because there are a plurality of tiles 104 a-p, each tile with a respective output, and because each switch's input can be varied to receive different tiles' outputs, many different dataflow configurations can be achieved.
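The 104 a-b-c-d-h-g-f-e-... ordering above is a serpentine (boustrophedon) scan of the 4x4 tile grid. The short Python sketch below, in which the letter labels are shorthand for tiles 104 a-p, reproduces that default ordering:

```python
# Generate the serpentine tile ordering used in the example dataflow
# configuration: left-to-right on even rows, right-to-left on odd rows.

def serpentine(rows, cols):
    """Yield (row, col) grid positions, reversing direction on odd rows."""
    for r in range(rows):
        cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cs:
            yield (r, c)

letters = "abcdefghijklmnop"
order = [letters[r * 4 + c] for r, c in serpentine(4, 4)]
# order == ['a','b','c','d','h','g','f','e','i','j','k','l','p','o','n','m']
```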
In some instances, some tiles 104 a-p may be faulty after production of the three-dimensionally stacked neural network accelerator 100. Post-production tests are executed after stacking the dies to identify which tiles are faulty. The identified faulty tiles are bypassed, and a dataflow configuration is established that eliminates the use of the faulty tiles. The tile configuration includes a redundant data path 304 that can be implemented to bypass faulty tiles 104 a-p. Eliminating a faulty tile(s) can include directing computational data to every other tile in the three-dimensionally stacked neural network except the identified faulty tiles. Each other tile will perform its designated computational functions as part of performing the computation of the neural network. In this instance, the other tiles can collectively perform the computations of the identified faulty tiles, or one tile can be dedicated to perform the computations of the identified faulty tile.
FIG. 5 illustrates an example dataflow configuration 500 for a three-dimensionally stacked neural network accelerator. In this example, the neural network accelerator includes two neural network dies, a top die 102 a and a bottom die 102 b arranged using a pair of inductively coupled connections to aggregate two rings together to form one ring of 16 tiles distributed over two chips stacked vertically. In some implementations, the three-dimensionally stacked neural network accelerator includes more than two dies stacked together (e.g., 3-10 dies). Each die includes a plurality of tiles 104 a-h. The tiles 104 a-h on both dies process data to perform neural network computations according to the dataflow configuration of data propagation through the tiles 104 a-h.
In this example, tile 104 f on the top die 102 a is a total tile failure. A total tile failure occurs when the switch fails, or when both the processing element bypass and the processing element fail. In a two-dimensional neural network accelerator with one die having a failure scenario consistent with the top die 102 a illustrated in FIG. 5, the entire die would be unusable because the dataflow configuration could not create a continuous dataflow path around the faulty tile. Because tile 104 f on the top die 102 a is a total tile failure, neighboring tiles cannot output data to tile 104 f because there is no way for tile 104 f to output data to another tile 104 a-h. Thus, as illustrated in FIG. 5, no tiles output data to the input of tile 104 f, and tile 104 f is completely bypassed by transmitting data to tiles adjacent and vertically adjacent to tile 104 f.
Tile 104 g on the top die 102 a and tile 104 c on the bottom die 102 b are partial failures. A partial failure occurs where a processing element 202 fails but the switch and data path are still functional. A tile that is experiencing a partial failure can still receive input data because the tile's switch 204 is still functional. Therefore, the partially failed tile's switch 204 can output data to different tiles 104 a-h using the transmitting inductive coils or output data using the processing element bypass 208. As illustrated in FIG. 5, tile 104 g on the top die 102 a and tile 104 c on the bottom die 102 b utilize the transmitting inductive coils to output data to the vertically adjacent tiles. However, in other implementations, tile 104 g on the top die 102 a and tile 104 c on the bottom die 102 b could either or both utilize the processing element bypass 208 to output data to adjacent tiles 104 a-h.
FIG. 6 is an example flow chart of a process 600, executed by a system of one or more computers, for modifying a dataflow configuration for tiles within a three-dimensionally stacked neural network accelerator. Aspects of FIG. 6 will be discussed in connection with FIG. 5. For example, the process can be performed by a host computer to which the accelerator is coupled.
Three-dimensionally stacked neural network accelerator tiles are tested to determine the functionality of each of the plurality of tiles on the neural network dies and to identify faulty tiles. Techniques for die testing include using out-of-band control channels, for example, a Joint Test Action Group (JTAG) scan chain, to test each of the tiles within the three-dimensionally stacked neural network accelerator. The test identifies which tiles are faulty and are to be bypassed to create the Hamiltonian circuit. In some implementations, the system is configured to determine the Hamiltonian circuit based on the arrangement of functional tiles 104 a-h within the three-dimensionally stacked neural network accelerator. In addition, the system can implement the dataflow configuration using the remaining functional tiles 104 a-h of the three-dimensionally stacked neural network accelerator according to the determined Hamiltonian circuit.
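Putting the testing and routing steps together, a host-side configuration flow consistent with this description might look like the Python sketch below, which reuses the find_ring function sketched earlier. Here probe_tile and program_switch are hypothetical stand-ins for the hardware-specific test (e.g., a JTAG scan) and switch-programming operations, which the patent does not specify.

```python
# Hypothetical host-side flow: test every tile, drop faulty tiles from the
# routing graph, find a Hamiltonian ring, and program the switches.

def configure_accelerator(tiles, edges, probe_tile, program_switch, start):
    faulty = {t for t in tiles if not probe_tile(t)}          # identify faults
    usable = {t: [n for n in edges[t] if n not in faulty]     # prune the graph
              for t in tiles if t not in faulty}
    ring = find_ring(usable, start)  # assumes `start` itself is functional
    if ring is None:
        raise RuntimeError("no dataflow configuration bypasses all faulty tiles")
    for src, dst in zip(ring, ring[1:]):
        program_switch(src, dst)     # set each switch's mux/demux selects
    return ring
```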
The testing can occur prior to or after stacking the neural network dies 102 a-b. For example, testing can occur prior to cutting the larger fabricated wafers into the smaller dies designed for the three-dimensionally stacked neural network accelerators. In this instance, each tile on the larger fabricated wafer is tested for functionality. Alternatively, after cutting the larger fabricated wafers into dies designed for the three-dimensionally stacked neural network accelerators, the dies are stacked together to create the three-dimensionally stacked neural network accelerator, and each tile is tested for functionality. In either instance, the tiles 104 a-p are tested prior to executing computations on the three-dimensionally stacked neural network accelerator.
In other implementations, three-dimensionally stacked neural network accelerators are continually analyzed during operation to identify tiles that may have been operational during the initial functional testing but have since failed or become faulty.
Referring to FIG. 6, the process includes determining that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile (402). A faulty tile is a tile that does not function as designed based on the analyzing. As previously described, the three-dimensionally stacked neural network accelerator comprises a plurality of neural network accelerator dies 102 a-e stacked on top of each other, and each neural network accelerator die includes a respective plurality of tiles 104 a-p. Each tile has a plurality of input and output connections that transmit data into and out of the tile. The data is used to execute neural network computations.
Referring back to the example illustrated in FIG. 5, the sixth and seventh tiles 104 f and 104 g on the top die 102 a and the third tile 104 c on the bottom die 102 b have been determined to be faulty. Faulty tiles 104 f and 104 g on the top die 102 a and tile 104 c on the bottom die 102 b are bypassed and removed from the dataflow configuration. Removing the faulty tiles includes modifying the dataflow configuration to transmit an output of a tile before the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile (404).
In some implementations, bypassing the faulty tile includes removing power that is provided to the faulty tile 104 c. Power can be removed from the faulty tile 104 c by disconnecting a switch that provides power to the faulty tile or by using programming logic to remove power provided to the tile. Removing power provided to the faulty tile 104 c ensures that the faulty tile is not operational and that the faulty tile 104 c does not draw unnecessary power from a power source providing power to the three-dimensionally stacked neural network accelerator. Further, data is transmitted either around the faulty tile or through the faulty tile, but not to the processing element 202 of the faulty tile.
In other implementations, bypassing the faulty tile 104 c can include turning off a clock that is unique to the faulty tile. The faulty tile's clock can be disabled using programming logic or by physically removing the clock from the circuit by disconnecting the clock's output. Turning off the faulty tile's clock stops the tile from executing processing functions, thereby deactivating the faulty tile and ceasing the faulty tile's operation.
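In a software-controlled implementation, both disabling mechanisms could plausibly be exposed as control-register writes. The sketch below is purely hypothetical: the patent names no registers or host interface, so the register map and the mmio_write callable are invented for illustration.

```python
# Hypothetical control-register interface for disabling a faulty tile by
# power gating and, optionally, clock gating (addresses are invented).

POWER_GATE_BASE = 0x1000  # one power-gate register per tile (assumed)
CLOCK_GATE_BASE = 0x2000  # one clock-gate register per tile (assumed)

def disable_tile(mmio_write, tile_index, gate_clock=True):
    """Cut power to a faulty tile and, optionally, stop its local clock."""
    mmio_write(POWER_GATE_BASE + tile_index, 0)      # remove tile power
    if gate_clock:
        mmio_write(CLOCK_GATE_BASE + tile_index, 0)  # stop the tile's clock
```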
According to the example illustrated in FIG. 5, the faulty tiles are bypassed and the dataflow configuration is modified according to a Hamiltonian circuit representation of the tile dataflow configuration. For example, one implementation can include having the first tile 104 a on the top die 102 a receive input data from an external source (1). The first tile 104 a processes the data and transmits its output to the switch of the second tile 104 b. The switch 204 of the second tile 104 b of the top die 102 a directs the data to the transmitting inductive coil (T) and transmits the data to the second tile 104 b of the bottom die 102 b. For ease of illustration, the inductive coils have been presented as single coils; however, as previously mentioned, each of the inductive coils is a plurality of coils communicatively coupled to transmit and receive data from coils on vertically adjacent tiles.
The second tile 104 b of the bottom die 102 b processes the data (2) and transmits the data to the third tile 104 c of the bottom die 102 b. Since the third tile 104 c of the bottom die 102 b was determined to be faulty, the processing element is bypassed and the data is sent to the transmitting inductive coil (T), which transmits the data using inductive coupling to the receiving inductive coil (R) of the third tile 104 c of the top die 102 a. The processing element processes the data (3) and transmits the data to the switch 204 of the fourth tile 104 d of the top die 102 a. The switch transmits the data to the transmitting inductive coil (T) of the fourth tile 104 d of the top die 102 a, which transmits the data to the fourth tile 104 d of the bottom die 102 b. The fourth tile 104 d of the bottom die 102 b processes the data (4) and the output is transmitted to the eighth tile 104 h of the bottom die 102 b.
The processing element of the eighth tile 104 h processes the data (5) and the output is transmitted to the switch of the seventh tile 104 g on the bottom die 102 b. The seventh tile 104 g processes the data (6) and the output is transmitted to the switch of the sixth tile 104 f of the bottom die 102 b. The sixth tile 104 f processes the data (7) and the output is transmitted to the switch of the fifth tile 104 e on the bottom die 102 b. The fifth tile's processing element processes the data (8) and the output is transmitted to the first tile 104 a of the bottom die 102 b. The first tile 104 a processes the data (9) and the output is transmitted to the switch of the second tile 104 b of the bottom die 102 b.
The second tile's switch transmits the data to the second tile's transmitting inductive coil (T), which transmits the data to the receiving inductive coil (R) of the second tile 104 b of the top die 102 a. The second tile's processing element processes the data (10) and the output is transmitted to the switch of the third tile 104 c of the top die 102 a. The data is transmitted using the processing element bypass to bypass the third tile's processing element, and the data is transmitted to the switch of the fourth tile 104 d of the top die 102 a. The fourth tile 104 d processes the data (11) and transmits the data to the eighth tile 104 h of the top die 102 a.
The eighth tile 104 h processes the data (12) and the output is transmitted to the switch of the seventh tile 104 g of the top die 102 a. As previously described, the seventh tile 104 g of the top die was determined to be faulty during testing. Therefore, the seventh tile's switch transmits the data to the transmitting inductive coil (T) of the seventh tile 104 g. The transmitting inductive coil (T) directs the data to the receiving inductive coil (R) of the seventh tile 104 g of the bottom die 102 b. The seventh tile's switch directs the data to the sixth tile 104 f of the bottom die 102 b using the processing element bypass. The switch of the sixth tile 104 f transmits the data to the switch of the fifth tile 104 e of the bottom die 102 b using the sixth tile's processing element bypass.
The fifth tile 104 e directs the data to the transmitting inductive coil (T) of the fifth tile 104 e, which transmits the data to the receiving inductive coil (R) of the fifth tile 104 e of the top die 102 a. The fifth tile 104 e processes the data (13) and the output is either directed back to the first tile 104 a of the top die 102 a or directed to an external device. During testing of the three-dimensionally stacked neural network accelerator, the sixth tile 104 f of the top die 102 a was identified as faulty. In this example, no data was routed to the sixth tile 104 f and the sixth tile 104 f was completely bypassed.
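For reference, the worked example above can be restated compactly as an ordered list of hops; this restatement is an editorial aid, not part of the disclosure. "compute N" entries carry the step numbers (1)-(13), and "bypass" entries mark data passing through a faulty or partially failed tile's switch or processing element bypass.

```python
# The modified ring of FIG. 5 as (die, tile, action) hops; tile letters
# abbreviate tiles 104a-104h on each die.
route = [
    ("top", "a", "compute 1"),    ("bottom", "b", "compute 2"),
    ("bottom", "c", "bypass"),    ("top", "c", "compute 3"),
    ("bottom", "d", "compute 4"), ("bottom", "h", "compute 5"),
    ("bottom", "g", "compute 6"), ("bottom", "f", "compute 7"),
    ("bottom", "e", "compute 8"), ("bottom", "a", "compute 9"),
    ("top", "b", "compute 10"),   ("top", "c", "bypass"),
    ("top", "d", "compute 11"),   ("top", "h", "compute 12"),
    ("top", "g", "bypass"),       ("bottom", "g", "bypass"),
    ("bottom", "f", "bypass"),    ("bottom", "e", "bypass"),
    ("top", "e", "compute 13"),
]
```

Tile 104 f of the top die never appears in the list, reflecting its total failure.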
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively, or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, which is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array), an ASIC (application specific integrated circuit), or a GPGPU (general-purpose graphics processing unit).
Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks. However, a computer need not have such devices.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.

Claims (18)

What is claimed is:
1. A circuit, comprising:
a plurality of dies vertically stacked on top of each other,
each die including a plurality of tiles,
each of the plurality of tiles comprising a multiplexer,
wherein the multiplexer of each tile is connected to outputs of a plurality of other tiles in the plurality of tiles and controls which output of the outputs of the plurality of other tiles the tile receives as an input;
wherein the plurality of tiles includes a first tile that has been determined to be a faulty tile;
wherein a dataflow configuration is configured to route the output of the tile prior to the faulty tile in an initial dataflow configuration to an input of a tile vertically adjacent to the faulty tile instead of to an input of the faulty tile; and
wherein each tile communicates with one or more adjacent tiles to create a ring network of tiles, wherein the ring network is configured such that each functional tile within the ring network receives and transfers data.
2. The circuit of claim 1, further comprising inductive coupling between tiles on different dies enabling communication between the tiles on different dies.
3. The circuit of claim 1, further comprising, for each multiplexer, a respective controller coupled to the multiplexer, wherein the controller is configured to transmit instructions to the multiplexer to designate an active input for the multiplexer.
4. The circuit of claim 1, further comprising, for each wafer, a respective controller coupled to each multiplexer included on the wafer, wherein the controller is configured to transmit instructions to each multiplexer included on the wafer to designate an active input for each multiplexer.
5. The circuit of claim 1, further comprising, a controller coupled to each multiplexer, wherein the controller is configured to transmit instructions to each multiplexer to designate an active input for each multiplexer.
6. The circuit of claim 1, wherein, to define the dataflow configuration to route the output of the tile prior to the faulty tile in the initial dataflow configuration to an input of a tile vertically adjacent to the faulty tile, the input of the multiplexer of the tile vertically adjacent to the faulty tile is set to receive the output of the tile prior to the faulty tile.
7. The circuit of claim 1, wherein the faulty tile has been disabled.
8. The circuit of claim 7, wherein disabling the faulty tile comprises removing power that is distributed to the faulty tile.
9. The circuit of claim 7, wherein disabling the faulty tile comprises turning off a clock that is unique to the faulty tile.
10. A method, comprising:
obtaining data specifying that a tile from a plurality of tiles in a three-dimensionally stacked neural network accelerator is a faulty tile, wherein:
the three-dimensionally stacked neural network accelerator comprises a plurality of neural network dies stacked on top of each other, each neural network die including a respective plurality of tiles,
each tile has input and output connections that route data into and out of the tile,
the three-dimensionally stacked neural network accelerator is configured to process inputs by routing the input through each of the plurality of tiles according to a dataflow configuration, and
each tile communicates with one or more adjacent tiles to create a ring network of tiles, wherein the ring network is configured such that each functional tile within the ring network receives and transfers data; and
modifying the dataflow configuration to route an output of a tile before the faulty tile in the dataflow configuration to an input connection of a tile that is positioned above or below the faulty tile on a different neural network die than the faulty tile.
11. The method of claim 10, wherein the neural network accelerator further comprises a respective multiplexer for each of the plurality of tiles that control routing of data between tiles, and wherein modifying the dataflow configuration comprises configuring the multiplexer for the tile that is positioned above or below the faulty tile on the different neural network die than the faulty tile to cause the output of the tile before the faulty tile to be routed to the input connection of the tile above or below the faulty tile.
12. The method of claim 10, further comprising disabling the faulty tile.
13. The method of claim 12, wherein disabling the faulty tile comprises removing power that is distributed to the faulty tile.
14. The method of claim 12, wherein disabling the faulty tile comprises turning off a clock that is unique to the faulty tile.
15. The method of claim 10, further comprising analyzing functionality of each of the plurality of tiles on the neural network dies to determine that the tile is a faulty tile.
16. The method of claim 15, wherein determining the tile is faulty comprises determining that the tile does not function as designed based on the analyzing.
17. The method of claim 10, wherein each tile communicates with adjacent tiles above or below the tile using inductive coupling.
18. The method of claim 10, further comprising, routing the output of the tile above or below the faulty tile to an input of a tile after the faulty tile in the dataflow configuration.
US15/685,672 2017-08-24 2017-08-24 Yield improvements for three-dimensionally stacked neural network accelerators Active 2039-07-12 US10963780B2 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US15/685,672 US10963780B2 (en) 2017-08-24 2017-08-24 Yield improvements for three-dimensionally stacked neural network accelerators
EP18766435.4A EP3635556A1 (en) 2017-08-24 2018-08-22 Yield improvements for three-dimensionally stacked neural network accelerators
PCT/US2018/047468 WO2019040587A1 (en) 2017-08-24 2018-08-22 Yield improvements for three-dimensionally stacked neural network accelerators
CN202311177479.1A CN117408324A (en) 2017-08-24 2018-08-22 Yield improvement of three-dimensional stacked neural network accelerator
CN201880038452.5A CN110730955B (en) 2017-08-24 2018-08-22 Yield improvement of three-dimensional stacked neural network accelerator
TW107129464A TWI698809B (en) 2017-08-24 2018-08-23 Yield improvements for three-dimensionally stacked neural network accelerators
US17/213,871 US11836598B2 (en) 2017-08-24 2021-03-26 Yield improvements for three-dimensionally stacked neural network accelerators
US18/527,902 US20240220773A1 (en) 2017-08-24 2023-12-04 Yield improvements for three-dimensionally stacked neural network accelerators

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/685,672 US10963780B2 (en) 2017-08-24 2017-08-24 Yield improvements for three-dimensionally stacked neural network accelerators

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/213,871 Continuation US11836598B2 (en) 2017-08-24 2021-03-26 Yield improvements for three-dimensionally stacked neural network accelerators

Publications (2)

Publication Number Publication Date
US20190065937A1 US20190065937A1 (en) 2019-02-28
US10963780B2 true US10963780B2 (en) 2021-03-30

Family

ID=63528901

Family Applications (3)

Application Number Title Priority Date Filing Date
US15/685,672 Active 2039-07-12 US10963780B2 (en) 2017-08-24 2017-08-24 Yield improvements for three-dimensionally stacked neural network accelerators
US17/213,871 Active 2038-10-13 US11836598B2 (en) 2017-08-24 2021-03-26 Yield improvements for three-dimensionally stacked neural network accelerators
US18/527,902 Pending US20240220773A1 (en) 2017-08-24 2023-12-04 Yield improvements for three-dimensionally stacked neural network accelerators

Family Applications After (2)

Application Number Title Priority Date Filing Date
US17/213,871 Active 2038-10-13 US11836598B2 (en) 2017-08-24 2021-03-26 Yield improvements for three-dimensionally stacked neural network accelerators
US18/527,902 Pending US20240220773A1 (en) 2017-08-24 2023-12-04 Yield improvements for three-dimensionally stacked neural network accelerators

Country Status (5)

Country Link
US (3) US10963780B2 (en)
EP (1) EP3635556A1 (en)
CN (2) CN110730955B (en)
TW (1) TWI698809B (en)
WO (1) WO2019040587A1 (en)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6686049B2 (en) * 2016-02-18 2020-04-22 UltraMemory Inc. Stacked semiconductor device and data communication method
US10963780B2 (en) * 2017-08-24 2021-03-30 Google Llc Yield improvements for three-dimensionally stacked neural network accelerators
CN111860815A (en) * 2017-08-31 2020-10-30 Cambricon Technologies Corp., Ltd. Convolution operation method and device
US10840741B2 (en) * 2018-03-30 2020-11-17 Integrated Device Technology, Inc. Wireless power multiple receive coil self-startup circuit for low battery condition
US12019527B2 (en) * 2018-12-21 2024-06-25 Graphcore Limited Processor repair
US10691182B2 (en) * 2019-05-20 2020-06-23 Intel Corporation Layered super-reticle computing: architectures and methods
GB2586278B (en) 2019-08-16 2022-11-23 Siemens Ind Software Inc Addressing mechanism for a system on chip
GB2586277B (en) * 2019-08-16 2022-11-23 Siemens Ind Software Inc Broadcasting event messages in a system on chip
GB2586279B (en) * 2019-08-16 2022-11-23 Siemens Ind Software Inc Routing messages in a integrated circuit chip device
US11610101B2 (en) 2019-08-30 2023-03-21 International Business Machines Corporation Formation failure resilient neuromorphic device
CN112860597B (en) * 2019-11-27 2023-07-21 Gree Electric Appliances, Inc. of Zhuhai Neural network operation system, method, device and storage medium


Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3120157B2 (en) * 1991-07-08 2000-12-25 Hitachi, Ltd. Loop logical channel control method
US9092595B2 (en) * 1997-10-08 2015-07-28 Pact Xpp Technologies Ag Multiprocessor having associated RAM units
NO308149B1 (en) * 1998-06-02 2000-07-31 Thin Film Electronics Asa Scalable, integrated data processing device
JP2003223236A (en) * 2002-01-30 2003-08-08 Matsushita Electric Ind Co Ltd Data processing system
US20080320069A1 (en) * 2007-06-21 2008-12-25 Yi-Sheng Lin Variable length fft apparatus and method thereof
US8933447B1 (en) * 2010-05-12 2015-01-13 Xilinx, Inc. Method and apparatus for programmable device testing in stacked die applications
US8493089B2 (en) * 2011-04-06 2013-07-23 International Business Machines Corporation Programmable logic circuit using three-dimensional stacking techniques
US8990616B2 (en) 2012-09-28 2015-03-24 International Business Machines Corporation Final faulty core recovery mechanisms for a two-dimensional network on a processor array
WO2014104726A1 (en) 2012-12-26 2014-07-03 Korea Electronics Technology Institute Method for providing user interface using one-point touch and apparatus for same
CN106462803B (en) * 2014-10-16 2019-12-10 Google LLC Enhancing neural networks with external memory
US9882562B1 (en) * 2016-12-07 2018-01-30 Xilinx, Inc. Rotated integrated circuit die and chip packages having the same
US10467183B2 (en) * 2017-07-01 2019-11-05 Intel Corporation Processors and methods for pipelined runtime services in a spatial array
US10963780B2 (en) * 2017-08-24 2021-03-30 Google Llc Yield improvements for three-dimensionally stacked neural network accelerators

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5422983A (en) 1990-06-06 1995-06-06 Hughes Aircraft Company Neural engine for emulating a neural network
CN1191626A (en) 1995-07-26 1998-08-26 Thomson Multimedia Color CRT comprising uniaxial tension focus mask
US7426501B2 (en) 2003-07-18 2008-09-16 Knowntech, Llc Nanotechnology neural network methods and systems
US7804504B1 (en) 2004-12-13 2010-09-28 Massachusetts Institute Of Technology Managing yield for a parallel processing integrated circuit
TWI527132B (en) 2010-09-02 2016-03-21 奧瑞可國際公司 Chip package, electronic computing device and method for communicating a signal
TWI585584B (en) 2011-12-29 2017-06-01 英特爾公司 Multi-level memory with direct access
US20160323137A1 (en) 2014-04-25 2016-11-03 International Business Machines Corporation Yield tolerance in a neurosynaptic system
US20160350645A1 (en) * 2015-05-29 2016-12-01 Samsung Electronics Co., Ltd. Data-optimized neural network traversal
US20160364644A1 (en) * 2015-06-10 2016-12-15 Samsung Electronics Co., Ltd. Spiking neural network with reduced memory access and reduced in-network bandwidth consumption
US20180246853A1 (en) * 2017-02-28 2018-08-30 Microsoft Technology Licensing, Llc Hardware node with matrix-vector multiply tiles for neural network processing
US10650286B2 (en) * 2017-09-07 2020-05-12 International Business Machines Corporation Classifying medical images using deep convolution neural network (CNN) architecture

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
Campbell et al. "3-D Wafer Scale Architectures for Neural Network Computing," IEEE Transactions on Components, Hybrids and Manufacturing Technology, vol. 16(7), Nov. 1, 1993, 10 pages.
Chen et al., "DaDianNao: A Machine-Learning Supercomputer," Proceedings of the 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE Computer Society, 2014, 14 pages.
Chen et al., "DianNao: A Small-Footprint High-Throughput Accelerator for Ubiquitous Machine-Learning," ACM Sigplan Notices, 2014, vol. 49. No. 4. ACM, 15 pages.
Chi et al., "Prime: A Novel Processing-in-memory Architecture for Neural Network Computation in ReRAM-based Main Memory," Proceedings of ISCA, 2016, vol. 43, 13 pages.
Kuroda. "ThruChip Interface for 3D system integration" IEEE 2010 International Symposium on VLSI Technology Systems and Applications, Apr. 26, 2010, 1 page.
Nomura et al. "3D Shared Bus Architecture Using Inductive Coupling Interconnect," IEEE 9th International Symposium on Embedded Multicore/Many-Core Systems-on-Chip, Sep. 23, 2015, 8 pages.
Ovtcharov et al. "Accelerating Deep Convolutional Neural Networks Using Specialized Hardware," Microsoft Research Whitepaper 2(11), Feb. 2015, 4 pages.
PCT International Search Report and Written Opinion in International Application No. PCT/US2018/047468, dated Dec. 4, 2018, 18 pages.
PCT International Preliminary Report on Patentability in International Application No. PCT/US2018/047468, dated Feb. 25, 2020, 11 pages.
TW Office Action in Taiwan Application No. 107129464, dated Aug. 22, 2019, 17 pages (with English translation).
U.S. Appl. No. 15/335,769, filed Oct. 27, 2016, Temam et al.
Zhang et al. "Optimizing FPGA-based Accelerator Design for Deep Convolutional Neural Networks," Proceedings of the 2015 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ACM, Feb. 22, 2015, 10 pages.

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11934945B2 (en) 2017-02-23 2024-03-19 Cerebras Systems Inc. Accelerated deep learning
US11232348B2 (en) 2017-04-17 2022-01-25 Cerebras Systems Inc. Data structure descriptors for deep learning acceleration
US11232347B2 (en) 2017-04-17 2022-01-25 Cerebras Systems Inc. Fabric vectors for deep learning acceleration
US11475282B2 (en) 2017-04-17 2022-10-18 Cerebras Systems Inc. Microthreading for accelerated deep learning
US11488004B2 (en) 2017-04-17 2022-11-01 Cerebras Systems Inc. Neuron smearing for accelerated deep learning
US11328208B2 (en) * 2018-08-29 2022-05-10 Cerebras Systems Inc. Processor element redundancy for accelerated deep learning

Also Published As

Publication number Publication date
CN110730955A (en) 2020-01-24
TW201921297A (en) 2019-06-01
US20240220773A1 (en) 2024-07-04
EP3635556A1 (en) 2020-04-15
US20190065937A1 (en) 2019-02-28
US20210216853A1 (en) 2021-07-15
WO2019040587A1 (en) 2019-02-28
CN110730955B (en) 2023-10-10
CN117408324A (en) 2024-01-16
TWI698809B (en) 2020-07-11
US11836598B2 (en) 2023-12-05

Similar Documents

Publication Publication Date Title
US11836598B2 (en) Yield improvements for three-dimensionally stacked neural network accelerators
US11948060B2 (en) Neural network accelerator tile architecture with three-dimensional stacking
US10262911B1 (en) Circuit for and method of testing bond connections between a first die and a second die
US20070247189A1 (en) Field programmable semiconductor object array integrated circuit
US10741524B2 (en) Redundancy scheme for a 3D stacked device
CN106252325A (en) The hybrid redundancy scheme of interconnection between tube core in Multi-chip packages
CN104603942A (en) Flexible sized die for use in multi-die integrated circuit
US9389945B1 (en) Test access architecture for stacked dies
JP2022543814A (en) Network computer with two built-in rings
US11489527B2 (en) Three dimensional programmable logic circuit systems and methods
US9348357B2 (en) Stitchable global clock for 3D chips
US20200293478A1 (en) Embedding Rings on a Toroid Computer Network
US10763181B2 (en) Semiconductor device and repair operation method thereof
CN110491850A (en) A kind of TSV failure tolerant method based on interval grouping
Gupta et al. Reconfigurable multipipelines for vector supercomputers
US10879903B2 (en) Distributed I/O interfaces in modularized integrated circuit devices
Sion et al. Defect diagnosis algorithms for a field programmable interconnect network embedded in a very large area integrated circuit
US11803681B1 (en) Wafer-scale large programmable device
EP3974988B1 (en) Self-healing system architecture based on reversible logic
Lee et al. A novel DFT architecture for 3DIC test, diagnosis and repair
US20240213985A1 (en) Systems And Methods For Configuring Signal Paths In An Interposer Between Integrated Circuits
JP2019092020A (en) TSV Error Tolerant Router Device for 3D Network On Chip
Kajiwara et al. A novel three-dimensional FPGA architecture with high-speed serial communication links
KR100854662B1 (en) Process connecting method of robot control module

Legal Events

Date Code Title Description
AS Assignment

Owner name: GOOGLE INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:NOWATZYK, ANDREAS GEORG;TEMAM, OLIVIER;REEL/FRAME:043400/0180

Effective date: 20170814

AS Assignment

Owner name: GOOGLE LLC, CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:GOOGLE INC.;REEL/FRAME:044567/0001

Effective date: 20170929

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: AWAITING TC RESP., ISSUE FEE NOT PAID

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4