WO2019195660A1 - Systems and methods for efficient matrix multiplication - Google Patents
Systems and methods for efficient matrix multiplication
- Publication number
- WO2019195660A1 (PCT/US2019/025961)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- electrodes
- analog
- input
- matrix
- voltages
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06G—ANALOGUE COMPUTERS
- G06G7/00—Devices in which the computing operation is performed by varying electric or magnetic quantities
- G06G7/12—Arrangements for performing computing operations, e.g. operational amplifiers
- G06G7/16—Arrangements for performing computing operations, e.g. operational amplifiers for multiplication or division
- G06G7/163—Arrangements for performing computing operations, e.g. operational amplifiers for multiplication or division using a variable impedance controlled by one of the input signals, variable amplification or transfer function
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/065—Analogue means
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/54—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using elements simulating biological cells, e.g. neuron
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0004—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements comprising amorphous/crystalline phase transition cells
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/0023—Address circuits or decoders
- G11C13/0026—Bit-line or column circuits
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/0023—Address circuits or decoders
- G11C13/0028—Word-line or row circuits
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/003—Cell access
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/004—Reading or sensing circuits or methods
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/0069—Writing or programming circuits or methods
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C7/00—Arrangements for writing information into, or reading information out from, a digital store
- G11C7/10—Input/output [I/O] data interface arrangements, e.g. I/O data control circuits, I/O data buffers
- G11C7/1006—Data managing, e.g. manipulating data before writing or reading out, data bus switches or control circuits therefor
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C13/00—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00
- G11C13/0002—Digital stores characterised by the use of storage elements not covered by groups G11C11/00, G11C23/00, or G11C25/00 using resistive RAM [RRAM] elements
- G11C13/0021—Auxiliary circuits
- G11C13/0069—Writing or programming circuits or methods
- G11C2013/0073—Write using bi-directional cell biasing
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2213/00—Indexing scheme relating to G11C13/00 for features not covered by this group
- G11C2213/10—Resistive cells; Technology aspects
- G11C2213/18—Memory cell being a nanowire having RADIAL composition
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2213/00—Indexing scheme relating to G11C13/00 for features not covered by this group
- G11C2213/10—Resistive cells; Technology aspects
- G11C2213/19—Memory cell comprising at least a nanowire and only two terminals
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C2213/00—Indexing scheme relating to G11C13/00 for features not covered by this group
- G11C2213/70—Resistive array aspects
- G11C2213/77—Array wherein the memory element being directly connected to the bit lines and word lines without any access device being used
Definitions
- This invention relates generally to computer hardware, and in particular to accelerators designed for performing efficient matrix operations in fields such as artificial intelligence and memory devices.
- Matrix operations are used in a variety of modern computing tasks. Many physical phenomena can be represented by one or more matrices of numerical values and processed in modern computers. For example, still photographs, video image frames, sensor output data, an interval of speech, financial transaction data, autonomous driving sensor data, and many other physical objects or parameters can be represented by one or more matrices of numerical values suitable for processing, manipulation and operation in modern computers. While general-purpose computing hardware can be used to perform matrix operations, the characteristics of matrix data and matrix operations can make them good candidates for designing hardware customized to more efficiently process matrix workloads and matrix operations compared to general-purpose computers. One form of matrix operation frequently used in modern computing tasks is digital vector-matrix multiplication, as illustrated below.
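- As a point of reference only (a software sketch, not the disclosed hardware), digital vector-matrix multiplication multiplies an input vector by a parameter matrix to yield an output vector; the values below are invented:

```python
# Software sketch of digital vector-matrix multiplication; all values invented.
import numpy as np

x = np.array([0.5, -1.0, 2.0])       # vector of input values
W = np.array([[0.1, 0.4],            # matrix of parameter values
              [0.2, 0.5],            # (e.g., weights of a neural network layer)
              [0.3, 0.6]])

y = x @ W                            # y[j] = sum_i x[i] * W[i, j]
print(y)                             # -> 0.45, 0.9
```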
- a system of sparse vector-matrix multiplication includes: a silicon substrate; a circuit layer formed in or on the substrate; a plurality of electrodes formed on the circuit layer; and a mesh formed randomly on the plurality of electrodes, wherein the circuit layer is configured to: receive a plurality of digital input signals; convert the plurality of digital input signals to a plurality of analog input signals; write the plurality of analog input signals on an input set of the plurality of electrodes; read from an output set of the plurality of electrodes a plurality of analog output signals; convert the plurality of analog output signals to a plurality of digital output signals; and output the plurality of digital output signals.
- the mesh includes coaxial nanowires having a metal core wrapped in two-terminal non-volatile memory (NVM) material.
- the non-volatile memory material includes a voltage-controlled resistance.
- the circuit layer includes: an input register configured to receive the plurality of the digital input signals; one or more digital to analog converters configured to convert the plurality of digital input signals to a plurality of analog input signals; one or more analog to digital converters configured to convert the plurality of analog output signals to the plurality of digital output signals; and an output register configured to receive and store the plurality of digital output signals.
- the circuit layer further includes a column driver and a row driver configured to selectively provide biasing voltages and/or training voltages to the plurality of the electrodes.
- the plurality of analog input signals include voltages and the plurality of analog output signals include currents, or vice versa.
- the circuit layer further includes: a plurality of amplifiers coupled to the plurality of electrodes, wherein amplifiers coupled to the input set of the plurality of electrodes are configured as sample-and-hold (SAH) amplifiers and configured to write the plurality of analog input signals to the input set, and amplifiers coupled to the output set of the plurality of the electrodes are configured as current-sensing amplifiers and configured to read the plurality of analog output signals.
- the plurality of electrodes include neurons in a neural network layer.
- the plurality of electrodes and the randomly formed mesh include a matrix of conductances.
- the matrix of conductances is tunable using one or more of temperature-driven phase-change memory mechanisms, unipolar resistive switching, and bipolar memristive mechanisms.
- a method of sparse vector-matrix multiplication includes: providing a plurality of electrodes on a silicon substrate; forming a layer of randomly arranged coaxial nanowires on the plurality of electrodes; receiving a plurality of digital input signals; converting the plurality of digital input signals to a plurality of analog input signals; writing the plurality of analog input signals on an input set of the plurality of electrodes; reading from an output set of the plurality of electrodes a plurality of analog output signals; converting the plurality of analog output signals to a plurality of digital output signals; and outputting the plurality of digital output signals.
- the coaxial nanowires include a metal core wrapped in two-terminal non-volatile memory (NVM) material.
- the NVM material includes one or more of a voltage-controlled resistance, memristor, phase-change material (PCM), and resistive random-access-memory (ReRAM) material.
- the method further includes: selectively providing biasing voltages to the plurality of the electrodes to enable writing voltages into or reading currents from the plurality of the electrodes.
- voltage-controlled resistances are formed at intersections of the plurality of the electrodes and the randomly arranged coaxial nanowires and the method further comprises selectively providing training voltages to the plurality of the electrodes to adjust the voltage-controlled resistances.
- the method further includes receiving a training signal indicating the electrodes in the plurality of the electrodes to which the training voltages are to be applied.
- the plurality of the electrodes include neurons in a neural network layer.
- the plurality of the electrodes and the layer of randomly arranged coaxial nanowires form a matrix of conductances and the conductances are tuned by performing gradient descent.
- the plurality of analog input signals include voltages and the plurality of analog output signals include currents.
- the input and output sets each comprise half of the electrodes of the plurality of electrodes.
- FIG. 1 illustrates a diagram of a matrix in a dot-product engine used to perform vector-matrix multiplication.
- FIG. 2 illustrates a diagram of a coaxial nanowire, which can be utilized in building high efficiency computing hardware.
- FIG. 3 illustrates a diagram of a sparse vector-matrix multiplication (SVMM) engine according to an embodiment.
- FIG. 4 illustrates a diagram of an embodiment of the circuit layer and the electrodes of the embodiment of FIG. 3.
- FIG. 5 illustrates a flow chart of a method of sparse vector-matrix multiplication according to an embodiment.
- the term "about" as used herein refers to the ranges of specific measurements or magnitudes disclosed.
- the phrase "about 10" means that the number stated may vary by as much as 1%, 3%, 5%, 7%, 10%, 15% or 20%. Therefore, at the variation range of 20%, the phrase "about 10" means a range from 8 to 12.
- the term "processor" can refer to various microprocessors, controllers, and/or hardware and software optimized for loading and executing software programming instructions, including graphics processing units (GPUs) optimized for handling high-volume matrix data related to image processing.
- the term "conductance" refers to the degree by which a component conducts electricity. Conductance can be calculated as the ratio of the current that flows through the component to the potential difference present across the component. Conductance is the reciprocal of the resistance and is measured in siemens.
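- For example (illustrative arithmetic only, not from the disclosure), a component carrying a current of 2 mA under a potential difference of 1 V has a conductance of

$$G = \frac{I}{V} = \frac{0.002\,\mathrm{A}}{1\,\mathrm{V}} = 0.002\,\mathrm{S}, \qquad R = \frac{1}{G} = 500\,\Omega.$$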
- the term "dense" in the context of matrix multiplication engines described herein can refer to engines where there is an electrical connection or path from each input to each output node of the matrix multiplication engine.
- One example of hardware specialized for performing vector-matrix multiplication is a dot-product-engine based on a crossbar architecture.
- FIG. 1 illustrates a diagram of a matrix 20 in a dot-product engine used to perform vector-matrix multiplication.
- Matrix 20 utilizes a crossbar array architecture and includes horizontal input voltage lines intersecting vertical output current lines.
- the input/output voltage/current lines can be neurons in a neural network layer, when matrix 20 is used to perform vector-matrix multiplication in the context of neural networks.
- the input voltage lines and output current lines are made of conductive metal material.
- a material made of non-volatile memory (NVM) 21 connects the input voltage lines to the intersecting output current lines.
- NVM non-volatile memory
- this is achieved via lithographically patterning electrode lines (horizontal and vertical lines) to sandwich an NVM-type material 21.
- the vector of input voltages is applied on the input voltage lines.
- the output current at each column is determined by the sum of the currents from each intersection of that column with the input voltage lines, as determined by applying Kirchhoff's current law (KCL) at each intersection.
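- A minimal software analogue of this crossbar computation (the conductance values below are hypothetical): by Ohm's law each cross-point contributes a current G[i, j] * V[i], and by KCL the column output current is their sum:

```python
# Crossbar dot product: I[j] = sum_i G[i, j] * V[i]; values are hypothetical.
import numpy as np

V = np.array([0.2, 0.4, 0.1])          # row input voltages (V)
G = np.array([[1e-6, 2e-6],            # cross-point conductances (S), set by
              [3e-6, 1e-6],            # programming the NVM resistances
              [2e-6, 4e-6]])

I = V @ G                              # column output currents (A)
```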
- the matrix 20 is partially formed by NVM material 21 whose resistances are controllable by applying an appropriate voltage. Therefore, a matrix of parameter values (e.g., a matrix of weights in a layer of a neural network) can be constructed in the matrix 20 by adjusting the intersection resistances to match the matrix of parameter values of a desired computation.
- the dot-product engine utilizing matrix 20 can be characterized as a dense array structure, where each input and output are connected.
- the chip area required to implement the matrix 20 scales quadratically relative to the number of input and output neurons it provides.
- input/output neurons and chip area needed to implement the matrix 20 scale at different rates. While input/output neurons scale linearly (on the edges of matrix 20), the chip area needed to implement vector-matrix multiplication of those additional neurons grows quadratically (in the area of the matrix 20).
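- To make the scaling concrete (an illustrative calculation, not from the disclosure): a crossbar with $N$ input lines and $N$ output lines exposes $2N$ input/output neurons along its edges while requiring $N^2$ cross-points of chip area,

$$\text{neurons} \propto 2N, \qquad \text{area} \propto N^2,$$

so doubling the number of neurons roughly quadruples the array area.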
- FIG. 2 illustrates a diagram of a coaxial nanowire 10, which, as will be described, can be utilized in building high efficiency computing hardware.
- the coaxial nanowire 10 includes a metal core 12 wrapped in a two-terminal non-volatile memory (NVM) material 14.
- the coaxial nanowire 10 touches two metal electrodes 16 and 18.
- the NVM material is a two-terminal device, whose resistance is controlled by voltages applied above or below some threshold voltages across the two terminals. For example, when the electrode 16 applies a voltage above a positive threshold voltage (SET-voltage) to the NVM material 14, the NVM material 14 may undergo dielectric breakdown and one or more conductive filaments are formed through it, thereby lowering its electrical resistance and increasing its conductivity. Subsequently, the electrical connection between the electrodes 16 and 18 can be strengthened via the now more-conductive NVM material 14 and the metal core 12.
- when the dielectric breakdown process is reversed, the filaments dissolve away and the electrical resistance of the NVM material 14 reverts to its original value or some other higher resistance, thereby weakening the electrical connection between the electrodes 16 and 18.
- for voltages above the SET-voltage, the NVM material 14 is transformed to a low resistance state (LRS), and for voltages below the RESET-voltage, the NVM material 14 is transformed to a high resistance state (HRS).
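- A toy software model can make this threshold behavior concrete (a sketch only; the threshold and resistance values below are invented, not the disclosed device physics):

```python
# Toy two-terminal NVM cell: resistance switches when the applied voltage
# crosses the SET or RESET threshold; all numeric values are invented.
V_SET, V_RESET = 1.5, -1.5        # threshold voltages (V), hypothetical
R_LRS, R_HRS = 1e3, 1e6           # low/high resistance states (ohms), hypothetical

def apply_voltage(resistance, v):
    """Return the cell's resistance after a voltage v is applied across it."""
    if v >= V_SET:                # dielectric breakdown: conductive filaments form
        return R_LRS
    if v <= V_RESET:              # filaments dissolve; resistance reverts high
        return R_HRS
    return resistance             # sub-threshold voltages leave the state intact
```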
- coaxial nanowire 10 forms a memory device at the intersection of its contact with an electrode. The resistance at the interface is dependent upon the previously applied voltage (whether the previous voltage was above the SET-voltage or below the RESET-voltage).
- NVM material 14 includes memristors, phase-change material (PCM), resistive random-access-memory (ReRAM) material, or any other material whose resistance is voltage-controlled, including any material which retains a resistance in response to an applied voltage with respect to one or more threshold voltages.
- Matrix multiplication is used in many modern computing tasks, such as artificial intelligence (AI), machine learning, neural networks, neural network training, various transforms (e.g., the Discrete Fourier Transform), and others.
- the non-volatile and controllable memory properties of the shell of coaxial nanowire 10 can be exploited to make hardware that can efficiently perform matrix multiplication.
- a form of matrix multiplication used in modern computing tasks is digital vector-matrix multiplication, where a vector of input values is multiplied by a matrix of parameter values.
- the multiplication yields an output vector.
- the coaxial nanowire 10 can be used to construct the matrix of parameter values.
- Parameter values can be any parameter values used in various computing tasks, for example weights in a layer of neural network.
- an alternative engine for performing vector-matrix multiplication can use electrodes distributed over a chip area and sparsely connected with coaxial nanowires 10, so that input and output nodes are distributed over the chip area rather than confined to the lateral edges of a crossbar array, as is the case in the dot-product engine of FIG. 1.
- a network of distributed electrodes sparsely-connected with coaxial nanowires 10 can construct a conductance matrix, which can be used as a parameter matrix in desired computations.
- a subset of the electrodes can be used to feed a vector of input voltages and the complementary subset of the electrodes can be probed to read a vector of output currents.
- the output vector of currents is the result of vector-matrix multiplication of the vector of input voltages with the matrix of conductances according to Ohm's law.
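- In software terms (a sketch under assumptions: SciPy is used purely for illustration, and the sparsity pattern is invented), only electrode pairs actually bridged by a nanowire contribute a conductance, so the multiplication is naturally sparse:

```python
# Sparse analogue of the engine's physical computation; pattern is invented.
import numpy as np
from scipy.sparse import random as sparse_random

G = sparse_random(8, 8, density=0.15, random_state=42, format="csr") * 1e-6

v_in = np.random.default_rng(0).uniform(0.0, 0.5, size=8)  # input voltages (V)
i_out = G.T @ v_in                     # output currents via Ohm's law and KCL (A)
```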
- FIG. 3 illustrates a diagram of a sparse vector-matrix multiplication (SVMM) engine 22.
- the SVMM engine 22 includes a silicon substrate 24, control circuitry within a circuit layer 26, for example, a complementary metal-oxide-semiconductor (CMOS) layer, a grid of electrodes 28 and a randomly formed mesh 30 of coaxial nanowires 10 deposited on top of the grid 28.
- Mesh 30 is placed above or formed on top of the electrode grid 28, providing physical contact between the mesh 30 and the top of electrode grid 28.
- the electrodes of the grid 28 can be grown through the mesh 30 as pillars of metal.
- the coaxial nanowires 10 deposited randomly on top of the electrodes of the grid 28 can provide electrical connections between the electrodes that they contact. Consequently, the coaxial nanowires 10 sparsely connect the electrodes of the grid 28.
- the strength of the electrical connections between the electrodes can be modulated by increasing or decreasing the resistances of the coaxial nanowires 10.
- the circuitry in the circuit layer 26 can be used to apply a SET-voltage or a RESET-voltage to some or all of the coaxial nanowires 10 in the mesh 30 via electrodes in the grid 28.
- the electrical resistances of the coaxial nanowires 10 in mesh 30 can increase or decrease depending on the voltages they receive via the electrodes in the grid 28, thereby strengthening or weakening the electrical connections between the electrodes of the grid 28.
- since the coaxial nanowires 10 in mesh 30 are randomly formed, they can create random electrical connections between the electrodes in the grid 28 via the NVM-type material and the metal cores of the nanowires 10.
- the electrodes of the grid 28 are sparsely connected via the coaxial nanowires 10 of mesh 30.
- the grid 28, sparsely connected with the mesh 30, forms a sparsely connected matrix of conductances, which can be used for vector-matrix multiplication.
- a vector of input voltages can be applied to a subset of the electrodes in the grid 28 (the input electrodes) and the remainder of the electrodes (the output electrodes) can be used to read an output vector of currents.
- the output vector of currents can represent the output of a vector-matrix multiplication of the vector of input voltages with the sparsely connected matrix of conductances formed by the grid 28 according to Ohm's law.
- the resistances formed at the intersection of the electrodes of the grid 28 and the mesh 30 can be adjusted by tuning or fitting to known sets of input/output pairs until a useful matrix of conductances is formed.
- while the matrix of conductances formed by the SVMM engine 22 is made of unknown or random resistances, formed by random connections between electrodes of the grid 28 via coaxial nanowires 10 of mesh 30, the conductances can be adjusted by applying a combination of SET-voltages and/or RESET-voltages to the electrodes of the grid 28 and observing the outputs.
- Various fitting techniques and algorithms may be used to determine the direction by which the electrode-mesh interface resistances should be adjusted.
- the interface resistances corresponding to the matrix of conductances formed by grid 28 and mesh 30 can be adjusted through a variety of means, including using voltage pulses at the electrodes of the grid 28 to switch or nudge the resistances according to temperature-driven phase-change memory mechanisms, unipolar resistive switching, or bipolar memristive mechanisms. These techniques can be used to tune the values of the conductance matrix to a task, for instance as a content-addressable memory (CAM), a neural network layer, or as a more general memory interconnect. Examples of algorithms which can be used in connection with the SVMM engine 22 can be found in International Patent Application No. PCT/US2018/033669, filed on May 21, 2018 and titled "DEEP LEARNING IN BIPARTITE MEMRISTIVE NETWORKS." In one embodiment, gradient descent learning can be used to tune the conductance matrix of the SVMM engine 22.
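- A software analogue of such tuning (a hedged sketch: this is plain gradient descent on a simulated conductance matrix, not the on-chip pulse mechanism; all data below are synthetic):

```python
# Fit a simulated conductance matrix to known input/output pairs by
# gradient descent on the mean squared output-current error; data synthetic.
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1e-6, size=(4, 4))          # random initial conductances
G_true = rng.uniform(0.0, 1e-6, size=(4, 4))     # target (normally unknown)

V_train = rng.uniform(0.0, 0.5, size=(100, 4))   # known input voltages
I_target = V_train @ G_true                      # known output currents

lr = 1e-2
for _ in range(500):
    err = V_train @ G - I_target                 # output-current error
    G -= lr * V_train.T @ err / len(V_train)     # MSE gradient step
    G = np.clip(G, 0.0, None)                    # conductances stay >= 0
```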
- the shape, number and geometry of the grid 28 can be modified based on implementation.
- the electrodes need not be in a grid format.
- Various design and implementation considerations may dictate an alternative geometry of the SVMM engine 22 without departing from the spirit of the described technology.
- FIG. 4 illustrates a diagram of an embodiment of the circuit layer 26 and the electrodes of the grid 28.
- Circuit layer 26 can be implemented as a CMOS layer and can include components such as an input register 32, an output register 34, a column driver 36, a row driver 38, one or more digital-to-analog converters (DACs) 40, one or more analog-to-digital converters (ADCs) 42, amplifiers 46, switches 44 and other components and circuitry as may be used to implement the functionality of the SVMM engine 22 in the circuit layer 26.
- Electrodes of the grid 28 are shown for illustration purposes, but in some embodiments, the electrodes of the grid 28 are metal pillars grown above the circuit layer 26 and may not be a part of the circuit layer 26.
- Mesh 30, while not shown in FIG. 4, is built above the electrodes of the grid 28 and provides random electrical connections between those electrodes as described in relation to FIG. 3.
- Electrodes of the grid 28 can be connected to the column driver 36 and row driver 38.
- the column and row drivers 36 and 38 include circuitry (e.g., logic gates, high and low power supply rails, etc.) to provide various voltages to the electrodes of the grid 28.
- the row and column drivers 36 and 38 can provide one or more bias voltages in the range above the RESET-voltage and below the SET-voltage to enable writing voltages and/or reading currents to or from one or more electrodes of the grid 28.
- Column and row drivers 36 and 38 can receive a training signal with respect to one or more electrodes of the grid 28.
- the column and/or row drivers 36 and 38 can provide a training voltage pulse above the SET-voltage or below the RESET-voltage to adjust the resistances at the electrode-mesh interfaces. If the training signal for one or more electrodes within the grid 28 is OFF, the column and/or row drivers 36 and 38 would not apply voltages above the SET-voltage or voltages below the RESET-voltage.
- the SVMM engine 22 operates at a virtual ground (mid-supply) and when the training signal is ON, a train of voltage pulses is sent to a transistor gate that connects one or more electrodes of the grid 28 to the high power supply rail (Vdd) or to the low power supply rail (ground).
- the column or row drivers 36 and 38 can receive one or more control signals indicating in which direction (e.g., high or low) the resistances at interfaces of the electrodes of the grid 28 and mesh 30 should be moved.
- the circuit layer 26 can be designed to enable addressing each electrode of the grid 28 individually or it can be designed to address multiple electrodes of the grid 28 in parallel for efficiency purposes and to save on-chip area consumed by the circuit layer 26 and components therein.
- the SVMM engine 22 can receive digital input signals (e.g., at predetermined intervals, intermittently or at random) from a variety of sources and depending on the application in which the SVMM engine 22 is used.
- Digital input signals can include sensor input data, mathematical image parameters representing physical phenomena, artificial intelligence input data, training input data, still photographs, frames of video images, intervals of speech and any other input signal for the purposes of vector-matrix multiplication.
- One or more DACs 40 can be used to convert the digital input signals to analog voltages that can be sourced on the electrodes of the grid 28.
- One or more ADCs 42 can convert the analog output currents to digital signals, which can be stored in the output register 34 and transmitted off-chip for further processing or other tasks.
- the SVMM engine 22 can be configured such that each electrode of the grid 28 can be an input or an output node, as opposed to devices where only the edge nodes can be input or output nodes.
- multiple electrodes of the grid 28 can be set in parallel as input electrodes and the remaining electrodes can be read as output electrodes.
- the circuit layer 26 can be designed with switches 44, which can connect an electrode of the grid 28 to a DAC 40 making that electrode available to receive an input signal.
- the switch 44 can be toggled to connect an electrode of the grid 28 to an ADC 42, making the electrode available as an output electrode.
- multiple electrodes of the grid 28, e.g., one or more columns of them, can be used as input electrodes (for example, by appropriately positioning the switches 44 to connect to DACs 40, or via other techniques known to persons of ordinary skill in the art).
- the remaining electrodes of the grid 28 can be used as output electrodes (e.g., the remaining columns).
- the size of input and output electrode sets can be permanent (e.g., by permanent connections in lieu of the switches 44) or be flexible (e.g., by an input/output selector signal controlling the switches 44 individually or in batches) or a combination of permanent and flexible.
- the circuit layer 26 is configured with the columns A and B used for input signals and columns C and D used for reading output signals.
- Each electrode of the grid 28 is connected to an amplifier 46 in the circuit layer 26.
- the amplifier 46 can be configured as a buffer or sample-and-hold (SAH) amplifier if its corresponding electrode is to be an input node.
- the amplifier 46 can also be configured as a transimpedance amplifier to sense a current output if its corresponding electrode is to be an output node.
- a controller 48 can configure the amplifiers 46 as input or output amplifiers, individually or in batches. Controller 48 can also coordinate other functions of the circuits in the circuit layer 26, such as controlling the switches 44 via an input/output selector signal, configuring the amplifiers 46, and various functionality related to column and row drivers 36 and 38 as described above. The controller 48 can also manage the timing of various operations and components of the SVMM engine 22, such as the timing of feeding input vectors from input register 32 into the input electrodes of the grid 28, the timing of reading output currents from the output electrodes of the grid 28 and the timing of other functions and components. The controller 48 can include circuitry such as short-term and/or long-term memory, storage, one or more processors, a clock signal and other components to perform its function.
- one or more electrodes can be in training mode, where the resistance at the electrode-mesh interface is to be adjusted.
- the term "training" is used because in some applications, such as neural network training, the interface resistances can be adjusted based on training algorithms in neural networks (e.g., gradient descent) to construct an effective matrix of conductances.
- the individual resistances at electrode-mesh interfaces may not be known; nonetheless, using training algorithms and observing input/output pairs, the resistances can be adjusted up and down until an effective matrix of conductances is formed by the resistances of the collection of electrode-mesh interfaces.
- one or more electrodes can be in WRITE mode, where an appropriate amount of voltage bias is applied from the column and/or row drivers 36 and 38, and an input voltage value is sourced at the one or more electrodes via one or more corresponding DACs 40.
- one or more electrodes can be in READ mode, where an appropriate amount of voltage bias is applied from the column and/or row drivers 36 and 38, and an output current is read from the one or more electrodes via one or more corresponding ADCs 42.
- An example configuration of the SVMM engine 22 is now described in relation to FIG. 4.
- Columns A and B are assigned as input electrodes and columns C and D are assigned as output electrodes.
- the amplifiers 46 in columns A and B are configured as SAH amplifiers and the amplifiers 46 in columns C and D are configured as transimpedance amplifiers capable of sensing current.
- Switches 44 in columns A and B connect amplifiers 46 in Columns A and B to DACs 40.
- the switches 44 in columns C and D connect the amplifiers 46 in columns C and D to the ADCs 42 of columns C and D.
- Column and row drivers 36 and 38 provide appropriate biasing voltages to the electrodes to enable the electrodes in columns A and B for WRITE mode and enable electrodes in columns C and D for READ mode.
- One or more DACs 40 convert a first vector of digital input signals received in input register 32 to analog signals and place them on the amplifiers 46 in column A.
- the amplifiers 46 in column A, configured as SAH amplifiers, hold the input voltages on the electrodes in column A.
- one or more DACs 40 convert a second vector of digital input signals received in input register 32 to analog signals and place them on the amplifiers 46 in column B.
- the amplifiers 46 in column B, configured as SAH amplifiers, hold the input voltages on the electrodes in column B. The process can continue if additional columns are used for input, until all input columns are fed.
- the controller 48 can begin scanning and reading output currents at the ADCs 42 in columns C and D and outputting the result into output register 34.
- the SVMM engine 22 and its circuit layer 26 can be configured where individual electrodes can be used as input/output nodes.
- half of the electrodes in the grid 28 can be used as input electrodes and half of the electrodes in the grid 28 can be used as output electrodes.
- Other divisions of electrodes as inputs or outputs are also possible depending on the application.
- the controller 48 can scan through the output nodes and read the first output current (I1) via an amplifier 46 configured as a current-sensing amplifier and use an ADC 42 to convert the output current I1 to a digital output and update the output register 34. Then the next output current I2 is read from the second output electrode via the amplifier 46 of the second output electrode, configured as a current-sensing amplifier. The process continues until the output current of the last output electrode is read and the output register 34 is updated. The nodes (or neurons, if the SVMM engine 22 is used in neural networks) are traversed once. A pseudocode sketch of this scan follows.
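- In sketch form (every name below is hypothetical glue standing in for the hardware just described, not an API from the disclosure):

```python
# Sequential scan of the output electrodes; all function names hypothetical.
def scan_outputs(output_electrodes, sense_current, adc_convert, output_register):
    for k, electrode in enumerate(output_electrodes):
        i_k = sense_current(electrode)          # current-sensing amplifier readout
        output_register[k] = adc_convert(i_k)   # digitize and store the result
    return output_register
```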
- the SVMM engine 22 can be effectively used in many applications, where the matrix of conductances of the SVMM engine 22 can be adjusted in various directions in a manner that optimizes a desirable function, for example, an activation function in a neural network and in other contexts.
- Other computational tasks can also utilize the SVMM engine 22. Examples include various image processing tasks, where images are matrix data representing physical phenomena, such as speech, weather, temperature, or other data structures as may exist in financial data, radar data, sensor data, still photographs, video image frames and other applications.
- the SVMM engine 22, utilizing a sparse network of conductances, offers a substantial performance advantage compared to devices using a dense network of conductances.
- Sparse networks are similar to the way the human brain functions. In sparse networks, not every information node is connected to all other information nodes; only some information nodes are connected. The sparsity is believed to enable superior computational ability with economical use of area and resources compared to more expensive networks, such as dense networks, where all information nodes are connected. Additionally, dot-product engines utilizing dense networks have proven expensive and complicated to design and operate due to the need to precisely control the resistances of the conductance matrix and difficulties in electrical or mechanical control of nanoscale material.
- the sparse matrix of conductances of SVMM engine 22 offers a higher performance to area ratio compared to devices utilizing dense networks for matrix multiplication.
- input and output nodes (or neurons, in the context of neural networks) in dense networks exist only on the lateral edges of the network. This allows the SVMM engine 22 a quadratic scaling advantage compared to devices using dense networks. As one increases the number of neurons, the SVMM engine 22 can maintain the same density of neurons, while devices using dense networks start to lose density.
- FIG. 5 illustrates a flow chart of a method 50 of sparse vector-matrix multiplication according to an embodiment.
- the method can be implemented in hardware using embodiments of FIGs. 3 and 4.
- the method 50 starts at the step 52.
- the method continues to the step 54 by providing a plurality of electrodes on a silicon substrate.
- the method then moves to the step 56 by forming a layer of randomly arranged coaxial nanowires on the plurality of electrodes.
- the method then moves to the step 58 by receiving a plurality of digital input signals.
- the method moves to the step 60 by converting the plurality of digital input signals to a plurality of analog input signals.
- the method then moves to the step 62 by writing the plurality of analog input signals on an input set of the plurality of electrodes.
- the method then moves to the step 64 by reading from an output set of the plurality of electrodes a plurality of analog output signals.
- the method then moves to the step 66 by converting the plurality of analog output signals to a plurality of digital output signals.
- the method then moves to the step 68 by outputting the plurality of digital output signals.
- the method ends at the step 70.
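- Gathered into one sketch, the steps of method 50 map to the following pipeline (every function name here is a hypothetical stand-in for the corresponding hardware stage, not an API from the disclosure):

```python
# End-to-end sketch of method 50; every callable here is a hypothetical
# stand-in for the corresponding hardware stage.
def sparse_vmm(digital_inputs, dac, write_inputs, read_outputs, adc):
    analog_in = [dac(x) for x in digital_inputs]   # step 60: digital -> analog
    write_inputs(analog_in)                        # step 62: drive input electrodes
    analog_out = read_outputs()                    # step 64: sense output currents
    return [adc(i) for i in analog_out]            # steps 66-68: digitize, output
```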
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Software Systems (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Computational Mathematics (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Neurology (AREA)
- Crystallography & Structural Chemistry (AREA)
- Chemical & Material Sciences (AREA)
- Algebra (AREA)
- Databases & Information Systems (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Artificial Intelligence (AREA)
- Power Engineering (AREA)
- Semiconductor Memories (AREA)
- Complex Calculations (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19781189.6A EP3776271A4 (en) | 2018-04-05 | 2019-04-05 | Systems and methods for efficient matrix multiplication |
JP2020551826A JP7130766B2 (en) | 2018-04-05 | 2019-04-05 | Systems and methods for efficient matrix multiplication |
KR1020207027083A KR102449941B1 (en) | 2018-04-05 | 2019-04-05 | Systems and Methods for Efficient Matrix Multiplication |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862653194P | 2018-04-05 | 2018-04-05 | |
US62/653,194 | 2018-04-05 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019195660A1 true WO2019195660A1 (en) | 2019-10-10 |
Family
ID=68063847
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2019/025961 WO2019195660A1 (en) | 2018-04-05 | 2019-04-05 | Systems and methods for efficient matrix multiplication |
Country Status (5)
Country | Link |
---|---|
US (3) | US10430493B1 (en) |
EP (1) | EP3776271A4 (en) |
JP (1) | JP7130766B2 (en) |
KR (1) | KR102449941B1 (en) |
WO (1) | WO2019195660A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111026700A (en) * | 2019-11-21 | 2020-04-17 | 清华大学 | Memory computing architecture for realizing acceleration and acceleration method thereof |
JP7525656B2 (en) | 2020-06-25 | 2024-07-30 | レイン・ニューロモーフィックス・インコーポレーテッド | Lithographic Memristive Arrays |
Families Citing this family (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA3090431A1 (en) | 2018-02-26 | 2019-08-29 | Orpyx Medical Technologies Inc. | Resistance measurement array |
EP3776271A4 (en) * | 2018-04-05 | 2022-01-19 | Rain Neuromorphics Inc. | Systems and methods for efficient matrix multiplication |
US10440341B1 (en) * | 2018-06-07 | 2019-10-08 | Micron Technology, Inc. | Image processor formed in an array of memory cells |
KR102105936B1 (en) * | 2018-06-25 | 2020-05-28 | 포항공과대학교 산학협력단 | Weight matrix input circuit and weight matrix circuit |
US11132423B2 (en) * | 2018-10-31 | 2021-09-28 | Hewlett Packard Enterprise Development Lp | Partition matrices into sub-matrices that include nonzero elements |
US12008475B2 (en) * | 2018-11-14 | 2024-06-11 | Nvidia Corporation | Transposed sparse matrix multiply by dense matrix for neural network training |
US11184446B2 (en) | 2018-12-05 | 2021-11-23 | Micron Technology, Inc. | Methods and apparatus for incentivizing participation in fog networks |
US20200194501A1 (en) * | 2018-12-13 | 2020-06-18 | Tetramem Inc. | Implementing phase change material-based selectors in a crossbar array |
US11256778B2 (en) | 2019-02-14 | 2022-02-22 | Micron Technology, Inc. | Methods and apparatus for checking the results of characterized memory searches |
US11327551B2 (en) | 2019-02-14 | 2022-05-10 | Micron Technology, Inc. | Methods and apparatus for characterizing memory devices |
US12118056B2 (en) * | 2019-05-03 | 2024-10-15 | Micron Technology, Inc. | Methods and apparatus for performing matrix transformations within a memory array |
JP7062617B2 (en) * | 2019-06-26 | 2022-05-06 | 株式会社東芝 | Arithmetic logic unit and arithmetic method |
US10867655B1 (en) | 2019-07-08 | 2020-12-15 | Micron Technology, Inc. | Methods and apparatus for dynamically adjusting performance of partitioned memory |
US11449577B2 (en) | 2019-11-20 | 2022-09-20 | Micron Technology, Inc. | Methods and apparatus for performing video processing matrix operations within a memory array |
US11853385B2 (en) | 2019-12-05 | 2023-12-26 | Micron Technology, Inc. | Methods and apparatus for performing diversity matrix operations within a memory array |
KR20210071471A (en) * | 2019-12-06 | 2021-06-16 | 삼성전자주식회사 | Apparatus and method for performing matrix multiplication operation of neural network |
US11450712B2 (en) | 2020-02-18 | 2022-09-20 | Rain Neuromorphics Inc. | Memristive device |
CN113094791B (en) * | 2021-04-13 | 2024-02-20 | 笔天科技(广州)有限公司 | Building data analysis processing method based on matrix operation |
CN115424646A (en) * | 2022-11-07 | 2022-12-02 | 上海亿铸智能科技有限公司 | Memory and computation integrated sparse sensing sensitive amplifier and method for memristor array |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015195365A1 (en) * | 2014-06-19 | 2015-12-23 | The University Of Florida Research Foundation, Inc. | Memristive nanofiber neural netwoks |
US20170141302A1 (en) * | 2014-07-07 | 2017-05-18 | Nokia Technologies Oy | Sensing device and method of production thereof |
US20170228345A1 (en) * | 2016-02-08 | 2017-08-10 | Spero Devices, Inc. | Analog Co-Processor |
US20180046919A1 (en) * | 2016-08-12 | 2018-02-15 | Beijing Deephi Intelligence Technology Co., Ltd. | Multi-iteration compression for deep neural networks |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR0185757B1 (en) | 1994-02-14 | 1999-05-15 | 정호선 | Learning method of choas circular neural net |
JPH09185596A (en) | 1996-01-08 | 1997-07-15 | Ricoh Co Ltd | Coupling coefficient updating method in pulse density type signal processing network |
JP2003263624A (en) * | 2002-03-07 | 2003-09-19 | Matsushita Electric Ind Co Ltd | Learning arithmetic circuit for neural network device |
US7392230B2 (en) | 2002-03-12 | 2008-06-24 | Knowmtech, Llc | Physical neural network liquid state machine utilizing nanotechnology |
US7219085B2 (en) * | 2003-12-09 | 2007-05-15 | Microsoft Corporation | System and method for accelerating and optimizing the processing of machine learning techniques using a graphics processing unit |
WO2008042900A2 (en) | 2006-10-02 | 2008-04-10 | University Of Florida Research Foundation, Inc. | Pulse-based feature extraction for neural recordings |
WO2009134291A2 (en) * | 2008-01-21 | 2009-11-05 | President And Fellows Of Harvard College | Nanoscale wire-based memory devices |
US20120036919A1 (en) * | 2009-04-15 | 2012-02-16 | Kamins Theodore I | Nanowire sensor having a nanowire and electrically conductive film |
US8050078B2 (en) | 2009-10-27 | 2011-11-01 | Hewlett-Packard Development Company, L.P. | Nanowire-based memristor devices |
US8433665B2 (en) | 2010-07-07 | 2013-04-30 | Qualcomm Incorporated | Methods and systems for three-memristor synapse with STDP and dopamine signaling |
KR20140071813A (en) | 2012-12-04 | 2014-06-12 | 삼성전자주식회사 | Resistive Random Access Memory Device formed on Fiber and Manufacturing Method of the same |
US10198691B2 (en) | 2014-06-19 | 2019-02-05 | University Of Florida Research Foundation, Inc. | Memristive nanofiber neural networks |
US9934463B2 (en) * | 2015-05-15 | 2018-04-03 | Arizona Board Of Regents On Behalf Of Arizona State University | Neuromorphic computational system(s) using resistive synaptic devices |
EP3262571B1 (en) * | 2016-03-11 | 2022-03-02 | Hewlett Packard Enterprise Development LP | Hardware accelerators for calculating node values of neural networks |
US10171084B2 (en) * | 2017-04-24 | 2019-01-01 | The Regents Of The University Of Michigan | Sparse coding with Memristor networks |
US20180336470A1 (en) | 2017-05-22 | 2018-11-22 | University Of Florida Research Foundation, Inc. | Deep learning in bipartite memristive networks |
US11538989B2 (en) * | 2017-07-31 | 2022-12-27 | University Of Central Florida Research Foundation, Inc. | 3-D crossbar architecture for fast energy-efficient in-memory computing of graph transitive closure |
EP3776271A4 (en) * | 2018-04-05 | 2022-01-19 | Rain Neuromorphics Inc. | Systems and methods for efficient matrix multiplication |
-
2019
- 2019-04-05 EP EP19781189.6A patent/EP3776271A4/en active Pending
- 2019-04-05 KR KR1020207027083A patent/KR102449941B1/en active IP Right Grant
- 2019-04-05 JP JP2020551826A patent/JP7130766B2/en active Active
- 2019-04-05 WO PCT/US2019/025961 patent/WO2019195660A1/en active Application Filing
- 2019-04-05 US US16/376,169 patent/US10430493B1/en active Active
- 2019-08-16 US US16/543,426 patent/US10990651B2/en active Active
-
2021
- 2021-03-30 US US17/217,776 patent/US20210216610A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015195365A1 (en) * | 2014-06-19 | 2015-12-23 | The University Of Florida Research Foundation, Inc. | Memristive nanofiber neural netwoks |
US20170141302A1 (en) * | 2014-07-07 | 2017-05-18 | Nokia Technologies Oy | Sensing device and method of production thereof |
US20170228345A1 (en) * | 2016-02-08 | 2017-08-10 | Spero Devices, Inc. | Analog Co-Processor |
US20180046919A1 (en) * | 2016-08-12 | 2018-02-15 | Beijing Deephi Intelligence Technology Co., Ltd. | Multi-iteration compression for deep neural networks |
Non-Patent Citations (1)
Title |
---|
See also references of EP3776271A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111026700A (en) * | 2019-11-21 | 2020-04-17 | 清华大学 | Memory computing architecture for realizing acceleration and acceleration method thereof |
CN111026700B (en) * | 2019-11-21 | 2022-02-01 | 清华大学 | Memory computing architecture for realizing acceleration and acceleration method thereof |
JP7525656B2 (en) | 2020-06-25 | 2024-07-30 | レイン・ニューロモーフィックス・インコーポレーテッド | Lithographic Memristive Arrays |
Also Published As
Publication number | Publication date |
---|---|
US20190311018A1 (en) | 2019-10-10 |
US10990651B2 (en) | 2021-04-27 |
JP2021518615A (en) | 2021-08-02 |
US10430493B1 (en) | 2019-10-01 |
JP7130766B2 (en) | 2022-09-05 |
KR102449941B1 (en) | 2022-10-06 |
EP3776271A1 (en) | 2021-02-17 |
US20210216610A1 (en) | 2021-07-15 |
US20200042572A1 (en) | 2020-02-06 |
EP3776271A4 (en) | 2022-01-19 |
KR20200124705A (en) | 2020-11-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10990651B2 (en) | Systems and methods for efficient matrix multiplication | |
CN110352436B (en) | Resistance processing unit with hysteresis update for neural network training | |
CN111433792B (en) | Counter-based resistance processing unit of programmable resettable artificial neural network | |
US9466362B2 (en) | Resistive cross-point architecture for robust data representation with arbitrary precision | |
Sebastian et al. | Tutorial: Brain-inspired computing using phase-change memory devices | |
US20240170060A1 (en) | Data processing method based on memristor array and electronic apparatus | |
Yakopcic et al. | Extremely parallel memristor crossbar architecture for convolutional neural network implementation | |
US11157810B2 (en) | Resistive processing unit architecture with separate weight update and inference circuitry | |
US11650751B2 (en) | Adiabatic annealing scheme and system for edge computing | |
Musisi-Nkambwe et al. | The viability of analog-based accelerators for neuromorphic computing: a survey | |
CN108154225B (en) | Neural network chip using analog computation | |
Wei et al. | Emerging Memory-Based Chip Development for Neuromorphic Computing: Status, Challenges, and Perspectives | |
KR102514931B1 (en) | Expandable neuromorphic circuit | |
CN113994346A (en) | Scalable integrated circuit with synapse electronics and CMOS integrated memory resistors | |
AU2021216710B2 (en) | Performance and area efficient synapse memory cell structure | |
CN114143412B (en) | Image processing method and image processing apparatus | |
Le et al. | CIMulator: a comprehensive simulation platform for computing-in-memory circuit macros with low bit-width and real memory materials | |
Zhou et al. | Synchronous Unsupervised STDP Learning with Stochastic STT-MRAM Switching | |
Chen | Design of resistive synaptic devices and array architectures for neuromorphic computing | |
Tu et al. | A novel programming circuit for memristors | |
Busygin et al. | Memory Device Based on Memristor-Diode Crossbar and Control Cmos Logic for Spiking Neural Network Hardware |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19781189 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20207027083 Country of ref document: KR Kind code of ref document: A Ref document number: 2020551826 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2019781189 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2019781189 Country of ref document: EP Effective date: 20201105 |