US20220207376A1 - Matrix inversion using analog resistive crossbar array hardware - Google Patents

Matrix inversion using analog resistive crossbar array hardware

Info

Publication number
US20220207376A1
Authority
US
United States
Prior art keywords
matrix
rpu
array
cells
inverse matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US17/134,814
Inventor
Tayfun Gokmen
Oguzhan Murat Onen
Chai Wah Wu
Mark S. Squillante
Malte Johannes Rasch
Tomasz J. Nowicki
Wilfried Haensch
Lior Horesh
Vasileios Kalantzis
Vanessa Lopez-Marrero
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US17/134,814 priority Critical patent/US20220207376A1/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NOWICKI, TOMASZ J., LOPEZ-MARRERO, VANESSA, ONEN, OGUZHAN MURAT, SQUILLANTE, MARK S., GOKMEN, TAYFUN, HAENSCH, WILFRIED, HORESH, LIOR, KALANTZIS, Vasileios, RASCH, Malte Johannes, WU, CHAI WAH
Publication of US20220207376A1 publication Critical patent/US20220207376A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/0445
    • G06N 3/0635
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/065 Analogue means

Definitions

  • This disclosure relates generally to analog resistive processing systems for neuromorphic computing, and techniques for performing hardware accelerated numerical computing tasks using an analog resistive processing system.
  • Information processing systems such as neuromorphic computing systems and artificial neural network (ANN) systems are utilized in various applications such as machine learning and inference processing for cognitive recognition and computing.
  • Such systems are hardware-based systems that generally include a large number of highly interconnected processing elements (referred to as “artificial neurons”) that operate in parallel to perform various types of computations.
  • The artificial neurons (e.g., pre-synaptic neurons and post-synaptic neurons) are interconnected via synaptic weights which represent the strengths of the connections between the neurons.
  • The synaptic weights can be implemented using an array of RPU cells having tunable resistive memory devices, wherein the conductance states of the RPU cells are encoded or otherwise mapped to the synaptic weights.
  • Embodiments of the disclosure include computing systems, devices, and methods for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing.
  • an exemplary embodiment includes a method which comprises receiving a request from an application to compute an inverse matrix of a given matrix and performing a matrix inversion process in response to the received request.
  • the matrix inversion process comprises storing a first estimated inverse matrix of the given matrix in an array of resistive processing unit (RPU) cells, performing a first iterative process on the first estimated inverse matrix stored in the array of RPU cells to converge the first estimated inverse matrix to a second estimated inverse matrix of the given matrix, and reading the second estimated inverse matrix from the array of RPU cells upon completion of the first iterative process.
  • An inverse matrix is returned to the application, wherein the returned inverse matrix is based, at least in part, on the second estimated inverse matrix.
  • Another exemplary embodiment includes a device which comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, and second control lines extending in a second direction across the array of RPU cells, and control circuitry.
  • Each RPU cell is connected at an intersection of one of the first control lines and one of the second control lines.
  • Each RPU cell comprises a resistive device with a tunable conductance, wherein conductance values of at least a portion of the resistive devices of the RPU cells in the array of RPU cells encode matrix values of an estimated inverse matrix that is stored in the array of RPU cells.
  • the estimated inverse matrix stored in the array of RPU cells represents an estimate of an inverse matrix of a given matrix.
  • the control circuitry is operatively coupled to the array of RPU cells to cause performance of a first operation and a second operation on the array of RPU cells.
  • the first operation comprises the control circuitry applying a row vector, which represents a row of the given matrix, to inputs of the first control lines to perform a vector matrix operation which comprises multiplying the input row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector on outputs of the second control lines.
  • the second operation comprises the control circuitry applying an error vector to inputs of the second control lines and inputting the row vector to the inputs of the first control lines, to perform an outer product operation of the error vector and the row vector on the array of RPU cells to thereby update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
  • Another exemplary embodiment includes a computing system which comprises a digital processing system and a neuromorphic computing system coupled to the digital processing system.
  • the digital processing system comprises one or more processors and memory to store program instructions that are executed by the one or more processors to perform a matrix inversion process to compute an inverse matrix of a given matrix.
  • the neuromorphic computing system comprises at least one neural core.
  • the at least one neural core comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, second control lines extending in a second direction across the array of RPU cells, and peripheral circuitry coupled to the first control lines and to the second control lines.
  • Each RPU cell is connected at an intersection of one of the first control lines and one of the second control lines, and each RPU cell comprises a resistive device with a tunable conductance, wherein conductance values of at least a portion of the resistive devices of the RPU cells in the array of RPU cells encode matrix values of an estimated inverse matrix that is stored in the array of RPU cells.
  • the estimated inverse matrix stored in the array of RPU cells represents an estimate of the inverse matrix of the given matrix.
  • the digital processing system controls the peripheral circuitry to perform first and second operations of the matrix inversion process on the array of RPU cells.
  • the first operation comprises the digital processing system applying a row vector, which represents a row of the given matrix, to inputs of the first control lines to perform a vector matrix operation which comprises multiplying the row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector on outputs of the second control lines.
  • the second operation comprises the digital processing system applying an error vector to inputs of the second control lines and inputting the row vector to the inputs of the first control lines, to perform an outer product operation of the error vector and the row vector on the array of RPU cells to thereby update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
  • FIG. 1 schematically illustrates a computing system for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing, according to an exemplary embodiment of the disclosure.
  • FIG. 2 schematically illustrates a computing system which comprises an array of resistive processing unit cells that can be used to perform a matrix inversion process, according to an exemplary embodiment of the disclosure.
  • FIG. 3 illustrates a method for performing a matrix inversion process according to an exemplary embodiment of the disclosure.
  • FIG. 4A schematically illustrates a method for performing an analog vector-matrix multiplication operation of a matrix inversion process that is performed on an array of resistive processing unit cells of a computing system, according to an exemplary embodiment of the disclosure.
  • FIG. 4B schematically illustrates a method for performing an analog vector-vector multiplication operation of a matrix inversion process that is performed on an array of resistive processing unit cells of a computing system to update an approximate inverse matrix, according to an exemplary embodiment of the disclosure.
  • FIG. 5A schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process, according to an exemplary embodiment of the disclosure.
  • FIG. 5B schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform an analog vector-vector multiplication operation of a matrix inversion process to update matrix values of an approximate inverse matrix stored in the array of resistive processing unit cells, according to an exemplary embodiment of the disclosure.
  • FIG. 6 schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process using signed matrix values, according to exemplary embodiments of the disclosure.
  • FIG. 7 schematically illustrates an exemplary architecture of a computing node which can host and execute a matrix inversion process according to an exemplary embodiment of the disclosure.
  • FIG. 8 depicts a cloud computing environment according to an exemplary embodiment of the disclosure.
  • FIG. 9 depicts abstraction model layers according to an exemplary embodiment of the disclosure.
  • FIG. 1 schematically illustrates a computing system for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing, according to an exemplary embodiment of the disclosure.
  • FIG. 1 schematically illustrates a computing system 100 comprising an application 110 , a digital processing system 120 , and a neuromorphic computing system 130 .
  • the digital processing system 120 comprises a plurality of processor cores 122 .
  • the neuromorphic computing system 130 comprises a plurality of neural cores 132 .
  • Each neural core 132 comprises one or more analog resistive processing unit arrays (e.g., analog RPU crossbar array hardware).
  • the neural cores 132 are configured to support hardware acceleration for computing an inverse matrix by performing multiply-accumulate (MAC) operations in the analog domain, thereby accelerating numerical operations such as vector-matrix multiplication, matrix-vector multiplication, vector-vector multiplication, and/or matrix multiplication operations, which are performed on the RPU arrays 134 .
  • the digital processing system 120 controls the execution of a matrix inversion process 140 which is performed to compute an inverse matrix for a given matrix A provided by the application 110 .
  • the matrix inversion process 140 comprises optimization solver methods, including a stochastic gradient descent (SGD) process 142 and a Newton optimization process 144 , which are utilized during the matrix inversion process 140 to compute an inverse matrix.
  • SGD process 142 and the Newton optimization process 144 are software modules that are executed by the processor cores 122 of the digital processing system 120 to perform the matrix inversion process 140 .
  • the matrix inversion process 140 utilizes the neuromorphic computing system 130 for hardware acceleration to compute an inverse matrix by performing multiply-accumulate (MAC) operations in the analog domain to perform various in-memory computations, such as vector-matrix and vector-vector operations, on an approximate (estimated) inverse matrix A est −1 that is stored in one or more of the RPU arrays 134 , as schematically illustrated in FIG. 1 .
  • the application 110 may comprise any type of computing application (e.g., scientific computing applications, engineering applications, graphics rendering applications, signal processing applications, MIMO (Multiple-Input, Multiple-Output) system for wireless communications, cryptography, etc.) which utilizes matrices and inverse matrices as computational objects to perform numerical operations, solve linear equations, and perform other computations.
  • the application 110 may invoke a matrix inversion process 140 to compute an inverse matrix A ⁇ 1 of a given matrix A which is input to the matrix inversion process 140 .
  • the matrix A comprises a symmetric positive definite (SPD) matrix.
  • An SPD matrix is a square symmetric matrix which has positive eigenvalues. SPD matrices arise in many physical and mathematical contexts for which inverse matrices of the SPD matrices are needed to perform computations.
  • the digital processing system 120 controls the execution of matrix inversion process 140 .
  • the matrix inversion process 140 will configure one or more cores 132 and associated RPU arrays 134 to provide hardware acceleration support for the matrix inversion process.
  • the matrix inversion process 140 computes an initial approximate inverse matrix A est −1 (e.g., a random inverse matrix) and stores the initial approximate inverse matrix A est −1 in one or more of the RPU arrays 134 of the one or more neural cores 132 which are configured to support the matrix inversion process 140 .
  • the SGD process 142 performs an iterative optimization process which utilizes numerical values in row vectors of the given matrix A as training data to train the approximate inverse matrix A est ⁇ 1 and perform an iterative update process in the analog domain (e.g., vector-matrix operations, vector-vector operations, etc. performed on the approximate inverse matrix A est ⁇ 1 in the RPU arrays 134 ) to update the values of the approximate inverse matrix A est ⁇ 1 based on errors that are computed against row vectors of an identity matrix I.
  • the iterative SGD process 142 converges the approximate inverse matrix A est ⁇ 1 to a close approximation of the actual inverse matrix A ⁇ 1 .
  • Upon completion of the SGD process 142 , the matrix inversion process 140 reads out the approximate inverse matrix A est −1 from the RPU arrays 134 of the neuromorphic computing system 130 and performs an optional post-processing optimization in the digital domain using, e.g., the Newton optimization process 144 to further enhance the accuracy of the approximate inverse matrix A est −1 . Upon completion of the optimization process (e.g., the analog optimization and the optional digital optimization), the matrix inversion process 140 returns the optimized approximate inverse matrix A est −1 as a final inverse matrix A final −1 . Details of exemplary embodiments of the optimization methods 142 and 144 will be discussed in further detail below with reference to, e.g., FIG. 3 .
  • FIG. 2 schematically illustrates a computing system which comprises an array of resistive processing unit cells that can be used to perform a matrix inversion process, according to an exemplary embodiment of the disclosure.
  • FIG. 2 schematically illustrates an exemplary embodiment of a neural core 132 and associated RPU array(s) 134 of the neuromorphic computing system 130 of FIG. 1 .
  • FIG. 2 schematically illustrates a computing system 200 (e.g., neuromorphic computing system) which comprises a two-dimensional (2D) crossbar array of RPU cells 210 arranged in a plurality of rows R 1 , R 2 , R 3 , . . . , Rm, and a plurality of columns C 1 , C 2 , C 3 , . . . , Cn.
  • the RPU cells 210 in each row R 1 , R 2 , R 3 , . . . , Rm are commonly connected to respective row control lines RL 1 , RL 2 , RL 3 , . . . , RLm (collectively, row control lines RL).
  • the RPU cells 210 in each column C 1 , C 2 , C 3 , . . . , Cn are commonly connected to respective column control lines CL 1 , CL 2 , CL 3 , . . . , CLn (collectively, column control lines CL).
  • Each RPU cell 210 is connected at (and between) a cross-point (or intersection) of a respective one of the row and column control lines.
  • the computing system 200 comprises a 4,096×4,096 array of RPU cells 210 .
  • the computing system 200 further comprises peripheral circuitry 220 connected to the row control lines RL 1 , RL 2 , RL 3 , . . . , RLm, as well as peripheral circuitry 230 connected to the column control lines CL 1 , CL 2 , CL 3 , . . . , CLn. Further, the peripheral circuitry 220 is connected to a data input/output (I/O) interface block 225 , and the peripheral circuitry 230 is connected to a data I/O interface block 235 .
  • the computing system 200 further comprises control signal circuitry 240 which comprises various types of circuit blocks such as power, clock, bias and timing circuitry to provide power distribution and control signals and clocking signals for operation of the peripheral circuitry 220 and 230 of the computing system 200 .
  • each RPU cell 210 in the computing system 200 comprises a resistive element with a conductance value that is tunable.
  • some or all of the RPU cells 210 in the computing system 200 comprise respective conductance values that are mapped to respective numerical matrix values of an approximate inverse matrix A est ⁇ 1 that is stored in the array of RPU cells 210 .
  • the resistive elements of the RPU cells 210 are implemented using resistive devices such as resistive switching devices (interfacial or filamentary switching devices), ReRAM, memristor devices, phase change memory (PCM) devices, etc., and other types of devices which have a tunable conductance (or tunable resistance level) which can be programmatically adjusted within a range of a plurality of different conductance levels to tune the weight of the RPU cell 210 .
  • the variable conductance elements of the RPU cells 210 can be implemented using ferroelectric devices such as ferroelectric field-effect transistor devices.
  • the RPU cells 210 can be implemented using an analog CMOS-based framework in which each RPU cell 210 comprises a capacitor and a read transistor.
  • the capacitor serves as a memory element of the RPU cell 210 and stores a weight value in the form of a capacitor voltage, wherein the capacitor voltage is applied to a gate terminal of the read transistor to modulate a channel resistance of the read transistor based on the level of the capacitor voltage, and wherein the channel resistance of the read transistor represents the conductance of the RPU cell and is correlated to a level of a read current that is generated based on the channel resistance.
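  • For illustration only, the following Python sketch models such a capacitor-based RPU cell under the simplifying assumption that the channel conductance of the read transistor varies linearly with the stored capacitor voltage; the class name and constants are hypothetical and are not taken from the disclosure.

```python
import numpy as np

class CapacitorRpuCell:
    """Toy model of an analog CMOS-based RPU cell (assumes a linear mapping)."""

    def __init__(self, v_cap=0.0, g_per_volt=1e-6, v_max=1.0):
        self.v_cap = v_cap            # weight stored as a capacitor voltage
        self.g_per_volt = g_per_volt  # hypothetical conductance-per-volt constant
        self.v_max = v_max            # capacitor voltage ceiling

    def update(self, dv):
        """Incrementally charge/discharge the capacitor (weight update)."""
        self.v_cap = float(np.clip(self.v_cap + dv, 0.0, self.v_max))

    def read_current(self, v_read):
        """Read current = read voltage x channel conductance (Ohm's law)."""
        g_channel = self.g_per_volt * self.v_cap  # gate voltage sets conductance
        return v_read * g_channel

cell = CapacitorRpuCell()
cell.update(0.5)                # program the weight
print(cell.read_current(0.2))   # ~1e-07 A for this toy parameterization
```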
  • While the row control lines RL and the column control lines CL are each shown in FIG. 2 as a single line for ease of illustration, it is to be understood that each row and column control line can include two or more control lines connected to the RPU cells 210 in the respective rows and columns, depending on the implementation and the specific architecture of the RPU cells 210 .
  • each row control line RL can include a complementary pair of word lines for a given RPU cell 210 .
  • each column control line CL may comprise multiple control lines including, e.g., one or more source lines (SL) and one or more bit lines (BL).
  • the peripheral circuitry 220 and 230 comprises various circuit blocks which are connected to the respective rows and columns in the 2D array of RPU cells 210 , and which are configured to perform vector-matrix multiply functions, matrix-vector multiply functions, and outer product update operations to implement operations needed to perform a matrix inversion process, according to an exemplary embodiment of the disclosure.
  • the peripheral circuitry 220 and 230 comprises pulse-width modulation (PWM) circuitry and read pulse driver circuitry to generate and apply PWM read pulses to the RPU cells 210 , in response to input vector values (read input values) received during different operations.
  • the peripheral circuitry 220 and 230 comprises digital-to-analog (D/A) converter circuitry that is configured to receive digital input vectors (to be applied to rows or columns) and convert the digital input vectors into analog input vector values that are represented by input voltages of varying pulse width.
  • the input voltages applied to rows (or columns) generate output vector values which are represented by output currents, wherein the weights of the RPU cells 210 are read out by measuring the output currents.
  • the peripheral circuitry 220 and 230 further comprises current integrator circuitry and analog-to-digital (A/D) converter circuitry to integrate read currents (I READ ) which are output and accumulated from the connected RPU cells 210 and convert the integrated currents into digital values (read output values) for subsequent computation.
  • the currents generated by the RPU cells 210 are summed on the columns (or rows) and this total current is integrated over a measurement time, tmeas, by current readout circuitry of the peripheral circuitry 220 and 230 .
  • the current readout circuitry comprises current integrators and analog-to-digital (A/D) converters.
  • each current integrator comprises an operational amplifier that integrates the current output from a given column (or row) (or differential currents from pairs of RPU cells implementing negative and positive weights) on a capacitor, and an analog-to-digital (A/D) converter converts the integrated current (e.g., an analog value) to a digital value.
  • the data I/O interfaces 225 and 235 are configured to interface with a digital processing core, wherein the digital processing core is configured to process input/outputs to the computing system 200 (neural core) and route data between different RPU arrays.
  • the data I/O interfaces 225 and 235 are configured to (i) receive external control signals and data from a digital processing core and provide the received control signals and data to the peripheral circuitry 220 and 230 , and (ii) receive digital read output values from peripheral circuitry 220 and 230 , and send the digital read output values to a digital processing core for processing.
  • FIG. 3 illustrates a method for performing a matrix inversion process according to an exemplary embodiment of the disclosure.
  • FIG. 3 illustrates a process flow which is implemented by the computing system 100 of FIG. 1 .
  • the application may need to perform a computation that requires an inverse matrix A ⁇ 1 of a given matrix A.
  • the computing system will receive a request from the given application to determine the inverse matrix A ⁇ 1 of the given matrix A (block 300 ).
  • the request will include the values of matrix A.
  • the matrix A comprises a square matrix, e.g., n ⁇ n matrix, with n rows and n columns, where n can be relatively large (e.g., 100 or greater).
  • the matrix A comprises an SPD matrix.
  • the computing system invokes a matrix inversion process (e.g., process 140 , FIG. 1 ) to compute the inverse matrix A ⁇ 1 of the input matrix A.
  • the invocation of the matrix inversion process comprises an initial process of configuring the neuromorphic computing system 130 to provide accelerator hardware support for performing the matrix inversion process (block 301 ).
  • the digital processing system 120 communicates with a programming interface of the neuromorphic computing system 130 to configure one or more neurons and a routing system of the neuromorphic computing system 130 to allocate and configure one or more neural cores to implement one or more interconnected RPU arrays for storing matrix values of an approximate (estimated) inverse matrix A est −1 and performing in-memory computations on the approximate inverse matrix A est −1 (alternatively referred to as approximate inverse matrix P).
  • the number of RPU arrays that are allocated and interconnected will vary depending on the size of the matrix A and the size of the RPU arrays. For example, if the RPU arrays have a size of 4096×4096, then one RPU array can be configured to store the values of the approximate inverse matrix A est −1 and perform a matrix inversion process for an n×n matrix A, where n is 4096 or less. In some embodiments, when the approximate inverse matrix A est −1 is smaller than the physical RPU array on which it is stored, any unused RPU cells can be set to zero and/or unused inputs to the RPU array can be padded by “zero” voltages, as illustrated in the sketch below.
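  • As a simple illustration of this padding scheme, the following NumPy sketch (the helper names map_to_rpu and apply_row are ours, not the disclosure's) embeds a small approximate inverse matrix into a larger emulated RPU array, setting unused cells to zero and padding unused inputs with zero "voltages":

```python
import numpy as np

def map_to_rpu(p_est, array_size):
    """Embed an n x n estimated inverse matrix into a larger RPU array model."""
    n = p_est.shape[0]
    assert n <= array_size, "matrix exceeds one array; interconnect multiple arrays"
    rpu = np.zeros((array_size, array_size))  # unused cells set to zero
    rpu[:n, :n] = p_est
    return rpu

def apply_row(rpu, row):
    """Pad an input row vector with zero 'voltages' and apply it to the array."""
    x = np.zeros(rpu.shape[0])
    x[:row.shape[0]] = row
    return x @ rpu  # emulated analog vector-matrix multiplication

p_est = np.random.rand(3, 3)                   # small estimated inverse matrix
rpu = map_to_rpu(p_est, array_size=8)          # small "physical" array for the example
y = apply_row(rpu, np.array([1.0, 2.0, 3.0]))
print(y[:3])                                   # matches [1, 2, 3] @ p_est
```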
  • in some embodiments, when the size of the n×n matrix A is greater than the size of a single RPU array, multiple RPU arrays can be operatively interconnected to form an RPU array which is large enough to store the values of the approximate inverse matrix A est −1 of the n×n matrix A.
  • the matrix inversion process determines an initial approximation of the inverse matrix A est −1 (e.g., a first estimated inverse matrix) and stores the initial approximation of the inverse matrix A est −1 on the allocated RPU array(s) of the configured neural core(s) (block 302 ).
  • the initial approximate inverse matrix A est ⁇ 1 comprises a random matrix that is determined using a predetermined function.
  • the initial approximated inverse matrix A est ⁇ 1 comprises a random matrix which comprises a set of uniformly distributed random numbers with a mean of zero and a variance of 1.
  • the initial values of the approximate inverse matrix A est ⁇ 1 range from 0 to 1.
  • Other techniques can be implemented for computing a random matrix based on, e.g., the given application, the size of the matrix A, etc.
  • a next phase of the exemplary matrix inversion process comprises performing an iterative SGD process (e.g., blocks 304 , 305 , 306 , 307 ) to train the approximate inverse matrix A est −1 using the rows of the matrix A as training data, and performing an iterative update process to update the values of the approximate inverse matrix A est −1 based on errors that are computed against row vectors of an identity matrix I.
  • For purposes of illustration, consider a 3×3 matrix A = [[a 11 , a 12 , a 13 ], [a 21 , a 22 , a 23 ], [a 31 , a 32 , a 33 ]].
  • the identity matrix I comprises a main diagonal with all values on the main diagonal equal to 1, and all other values equal to zero.
  • the approximate inverse matrix A est −1 is denoted P, where p ij denotes the matrix value of P at the i th row and j th column.
  • an analog vector-matrix multiplication is performed on the RPU array(s) by multiplying the i th row vector of matrix A by the approximate inverse matrix P (block 303 ).
  • For example, for the first row vector A 1 of the matrix A, the corresponding target row vector is the first row of the identity matrix, I 1 = [1, 0, . . . , 0].
  • an iteration of an update process is performed to update numerical values of the approximate inverse matrix P.
  • the approximate inverse matrix P is updated by performing an analog outer product of the error vector δ i and a transpose A i T of the current row vector A i (block 305 ). More specifically, in some embodiments, an update process is performed as: P′ = P − η(A i T ⊗ δ i ), where:
  • P′ denotes an update to the current approximate inverse matrix P;
  • δ i = [δ 1 , δ 2 , . . . , δ n ] denotes the error vector computed as (A i × P) − I i ; and
  • η denotes a learning rate (e.g., ranging from 0.01 to 0.001).
  • the learning rate η is a scale factor that is applied to the values of the transposed row vector A i T .
  • the transposed row vector A i T represents the i th column of a transpose A T of the matrix A.
  • each “epoch” results in n updates to the approximate inverse matrix P, one update for each of the n row vectors A i of the matrix A.
  • the SGD process is implemented to train the approximate inverse matrix P for a predetermined number of epochs, in which it is assumed that the approximate inverse matrix P converges to a sufficient approximation of the actual inverse matrix A −1 with an analog accuracy (e.g., error sum of squares (SSE)) that is acceptable for the given application, or which otherwise provides a sufficiently accurate approximation of the actual inverse matrix A −1 to be utilized as a seed in a digital optimization process that may be applied in the digital domain to obtain an approximation of the actual inverse matrix A −1 with even greater accuracy.
  • a predefined number of epochs can be, e.g., 100 epochs. However, the predefined number of epochs will vary depending on the application and the degree of accuracy needed from the analog SGD training process. In some embodiments, the predefined number of epochs that is utilized for a given application can be determined based on historical empirical information, from which one may determine a number of epochs that may result in an approximate inverse matrix A est −1 with a target accuracy.
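  • The analog SGD procedure described above can be emulated digitally with a short NumPy sketch. This is a behavioral model only, shown under the sign convention δ i = (A i × P) − I i with the outer-product update subtracted; it ignores analog noise, pulse coding, and conductance limits:

```python
import numpy as np

def invert_by_sgd(A, eta=0.05, epochs=500, seed=0):
    """Emulate the analog SGD matrix-inversion loop (cf. blocks 302-307)."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    I = np.eye(n)
    P = rng.random((n, n))                    # initial random estimate (block 302)
    for _ in range(epochs):                   # one epoch = n row-wise updates
        for i in range(n):
            y = A[i] @ P                      # analog vector-matrix multiply (block 303)
            delta = y - I[i]                  # error vs. row of identity (block 304)
            P -= eta * np.outer(A[i], delta)  # analog outer-product update (block 305)
    return P

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                    # a small SPD matrix
P = invert_by_sgd(A)
print(np.round(A @ P, 3))                     # approaches the identity matrix
```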
  • a next epoch will be performed starting with inputting the first row vector A 1 of the matrix A to the RPU array(s) and performing the analog vector-matrix multiplication operation (return to block 303 ).
  • the approximate inverse matrix P which is read out from the RPU array(s) (e.g., the second estimated inverse matrix) will be returned to the requesting application as the final inverse matrix A final ⁇ 1 .
  • the approximate inverse matrix P that is computed in the analog domain using the SGD process may be deemed sufficiently accurate for the given application.
  • the approximate inverse matrix P (e.g., second estimated inverse matrix) which is read out from the RPU array(s) will be further processed in the digital domain by performing an iterative digital optimization process to converge the approximate inverse matrix P to a final inverse matrix A final ⁇ 1 (e.g., third estimated inverse matrix) which is more accurate than the approximate inverse matrix P (e.g., second estimated inverse matrix) which is read out from the RPU array(s) (block 310 ).
  • digital post-processing is performed using Newton's optimization process.
  • an error checking process is performed which comprises (i) computing an error based on (P i+1 ⁇ A) ⁇ I, and (ii) comparing the computed error to an error threshold value.
  • Each iteration of the digital optimization should result in the matrix multiplication (P i+1 ⁇ A) becoming increasingly closer to the identity matrix I. If the computed error (following a given iteration) is less than or equal to the error threshold value, the digital optimization process can terminate, and the last computed approximate inverse matrix P i+1 is returned as the final inverse matrix A final ⁇ 1 to the requesting application (block 311 ).
  • the error threshold value is based on, e.g., a target bit precision (e.g., 64-bit precision, 128-bit precision, etc.) which is desired for the given application.
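  • This excerpt does not spell out the digital Newton optimization in detail. One standard choice that is consistent with the description, shown here purely as an assumption, is the Newton-Schulz iteration P ← P(2I − A×P), combined with the error check based on (P × A) − I described above:

```python
import numpy as np

def newton_refine(A, P, tol=1e-12, max_iter=50):
    """Digitally refine an approximate inverse P of A (Newton-Schulz sketch)."""
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        err = np.linalg.norm(P @ A - I)  # error based on (P x A) - I
        if err <= tol:                   # compare against the error threshold
            break
        P = P @ (2.0 * I - A @ P)        # Newton-Schulz update step
    return P

A = np.array([[4.0, 1.0], [1.0, 3.0]])
P0 = np.linalg.inv(A) + 1e-3            # stand-in for a noisy analog seed
P = newton_refine(A, P0)
print(np.round(P @ A, 12))              # ~ identity to high precision
```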
  • FIGS. 4A, 4B, 5A and 5B schematically illustrate the vector-matrix multiplication and outer product (update) computations that are performed on an array of RPU cells to implement the process flow of blocks 303 , 304 , and 305 ( FIG. 3 ).
  • FIG. 4A schematically illustrates a vector-matrix multiplication that is performed on an array of RPU cells 210 of the computing system 200 of FIG. 2 .
  • the conductance values of the RPU cells 210 are mapped to respective matrix elements 212 of an approximate inverse matrix P that is stored in the array of RPU cells 210 .
  • Each RPU cell 210 performs a local multiplication operation 214 and local summation operation 216 to perform a vector matrix multiplication operation (e.g., block 303 , FIG. 3 ).
  • The vector-matrix multiplication is computed as y = x × P, where x denotes a 1×n row vector of the matrix A, and P denotes the approximate inverse of the matrix A which is stored in the array of RPU cells 210 .
  • FIG. 4A shows two elements (x i , x i+1 ) of the input row vector x and the corresponding two elements (y i , y i+1 ) of the output vector y which are generated as a result of the vector-matrix multiplication.
  • FIG. 4B illustrates a vector-vector multiplication (outer product) that is performed on the array of RPU cells 210 of the computing system 200 of FIG. 2 to update the matrix values 212 of the approximate inverse matrix P that is stored in the array of RPU cells 210 .
  • the vector-vector outer product comprises a multiplication operation and an incremental weight update to be performed locally at each cross-point (RPU cell 210 ) in the computing system 200 of FIG. 2 .
  • the update process is computed as: p ij ← p ij + η x i δ j , where p ij represents the matrix value of P stored in the RPU cell 210 at the cross-point of the i th row and the j th column (for simplicity, the layer index is omitted).
  • voltage pulses representing vectors x and ⁇ are simultaneously applied to the rows and the columns, wherein each RPU cell 210 performs a local multiplication and summation operation by processing the voltage pulses coming from the column and the row and hence achieving an incremental weight update.
  • stochastic translator circuitry in the peripheral circuitry 220 and 230 is utilized to generate stochastic bit streams that represent the input vectors x i and δ j .
  • the stochastic bit streams for the vectors x i and δ j are applied to the rows and columns in the 2D crossbar array of RPU cells 210 , wherein the conductance value of a given RPU cell 210 (and hence, the corresponding matrix value) will change depending on the coincidence of the x i and δ j stochastic pulse streams input to the given RPU cell 210 .
  • the vector cross product operations for the update operation are implemented based on the known concept that coincidence detection (using an AND logic gate operation) of stochastic streams representing real numbers is equivalent to a multiplication operation.
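  • The coincidence principle can be illustrated with a short Monte-Carlo sketch (a sketch of the stated principle, not of the actual translator circuitry): two stochastic bit streams whose on-probabilities encode x and δ are combined with a logical AND, and the mean coincidence rate approximates the product x·δ:

```python
import numpy as np

rng = np.random.default_rng(1)

def stochastic_multiply(x, delta, stream_len=100_000):
    """Approximate x * delta by AND-ing two stochastic bit streams.

    Each bit is 1 with probability equal to the encoded value (in [0, 1]),
    so the expected coincidence rate equals the product of the two values.
    """
    x_stream = rng.random(stream_len) < x      # row pulse stream
    d_stream = rng.random(stream_len) < delta  # column pulse stream
    return np.logical_and(x_stream, d_stream).mean()

print(stochastic_multiply(0.6, 0.3))  # ~0.18, i.e. approximately 0.6 * 0.3
```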
  • FIG. 5A schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process, according to an exemplary embodiment of the disclosure.
  • FIG. 5A schematically illustrates a computing system 500 which comprises a crossbar array of RPU cells 505 , wherein each RPU cell 510 in the array 505 comprises an analog non-volatile resistive element (represented as a variable resistor having a tunable conductance G) at the intersection of each row (R 1 , R 2 , . . . , Rn) and column (C 1 , C 2 , . . . , Cn).
  • the array of RPU cells 505 provides a matrix of conductance values Gij which are mapped to matrix values of an approximate inverse matrix P, which are encoded by the conductance values Gij (where i represents a row index and j denotes a column index) of the respective RPU cells 510 .
  • multiplexers in the peripheral circuitry of the computing system 500 are activated to selectively connect row line driver circuitry 520 to the row lines R 1 , R 2 , . . . , Rn.
  • the row line driver circuitry 520 comprises a plurality of digital-to-analog (DAC) circuit blocks 522 - 1 , 522 - 2 , . . . , 522 - n (collectively, DAC circuit blocks 522 ) which are connected to respective row lines R 1 , R 2 , . . . , Rn.
  • multiplexers in the peripheral circuitry of the computing system 500 are activated to selectively connect readout circuitry 530 to the column lines C 1 , C 2 , . . . , Cn.
  • the readout circuitry 530 comprises a plurality of readout circuit blocks 530 - 1 , 530 - 2 , . . . , 530 - n , which are connected to respective column lines C 1 , C 2 , . . . , Cn.
  • the readout circuit blocks 530 - 1 , 530 - 2 , . . . , 530 - n comprise respective current integrator circuitry 532 - 1 , 532 - 2 , . . . , 532 - n and respective analog-to-digital (ADC) circuitry 534 - 1 , 534 - 2 , . . . , 534 - n .
  • the current integrator circuitry comprises current integrator circuit blocks, wherein each current integrator comprises an operational transconductance amplifier (OTA) with negative capacitive feedback to convert an input current (aggregate column current) to an output voltage on an output node of the current integrator circuit, and wherein, at the end of an integration period, each ADC circuit latches in the output voltage generated at the output node of a respective current integrator circuit and quantizes the output voltage to generate a digital output signal.
  • the digital signals x 1 , x 2 , . . . , x n are input to respective DAC circuit blocks 522 - 1 , 522 - 2 , . . . , 522 - n .
  • the DAC circuit blocks 522 - 1 , 522 - 2 , . . . , 522 - n each comprise pulse-width modulation circuitry and driver circuitry configured to generate pulse-width modulated (PWM) read pulses V 1 , V 2 , . . . , V n that are applied to the respective row lines R 1 , R 2 , . . . , Rn.
  • in some embodiments, the read pulses are fixed-amplitude pulses V with a tunable duration, wherein the pulse duration is a multiple of a prespecified time period (e.g., 1 nanosecond) and is proportional to the value of the input vector.
  • for example, a given digital input value of 0.5 can be represented by a voltage pulse of 40 ns, and a digital input value of 1 can be represented by a voltage pulse of 80 ns. In some embodiments, a digital input value of 1 can be encoded to an analog voltage pulse with a pulse duration that is equal to the integration time T meas .
  • the resulting analog input voltages V 1 , V 2 , . . . , V n are applied to the row lines R 1 , R 2 , . . . , Rn.
  • in response to the applied voltages, each RPU cell 510 generates a corresponding read current I READ = V i × G ij (based on Ohm's law).
  • the read currents that are generated by the RPU cells 510 on each column j are summed together (based on Kirchhoff's current law) to generate respective currents I 1 , I 2 , . . . , I n at the output of the respective columns C 1 , C 2 , . . . , Cn.
  • the resulting column currents I 1 , I 2 , . . . , I n represent the result of a vector-matrix multiplication operation that is performed (to multiply the given row vector of the matrix A with the approximate inverse matrix P), wherein the input analog voltage vector [V 1 , V 2 , . . . , V n ] is multiplied by the conductance matrix G (of conductance values Gij) to generate and output an analog current vector [I 1 , I 2 , . . . , I n ], as illustrated in FIG. 5A .
  • the resulting aggregate read currents I 1 , I 2 , . . . , I n at the output of the respective columns C 1 , C 2 , . . . , Cn are input to respective readout circuit blocks 530 - 1 , 530 - 2 , . . . , 530 - n of the readout circuitry 530 .
  • the aggregate read currents I 1 , I 2 , . . . , I n are integrated by the respective current integrator circuits 532 - 1 , 532 - 2 , . . . , 532 - n to generate respective output voltages, which are quantized by the respective ADC circuits 534 - 1 , 534 - 2 , . . . , 534 - n to generate the digital output signals y 1 , y 2 , . . . , y n .
  • the digital output signals y 1 , y 2 , . . . , y n are transmitted to the digital processing system and processed to determine the errors.
  • the error is determined as a difference between the output values y 1 , y 2 , . . . , y n and target values for a corresponding row of the identity matrix I.
  • the digital processing system generates a digital error vector δ which is then applied to the computing system 500 to perform an update process, as schematically illustrated in FIG. 5B .
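  • The pulse-width encoding and column-current integration described above for FIG. 5A can be modeled in a few lines. The sketch below assumes an ideal integrator, a fixed read amplitude, and digital inputs scaled to [0, 1], so the charge integrated on column j is proportional to the sum over i of x i × G ij:

```python
import numpy as np

def analog_forward(x, G, v_read=0.2, t_meas=80e-9):
    """Model the FIG. 5A readout: PWM input encoding plus column integration."""
    durations = x * t_meas              # pulse width proportional to input value
    # Charge on column j: sum_i (v_read * G_ij) * duration_i  (Ohm + Kirchhoff)
    charge = durations @ (v_read * G)
    return charge / (v_read * t_meas)   # normalize back to y = x @ G

G = np.array([[1.0, 0.5],
              [0.25, 2.0]])             # conductances encoding matrix values
x = np.array([0.5, 1.0])                # e.g. 40 ns and 80 ns pulses
print(analog_forward(x, G))             # equals x @ G -> [0.75, 2.25]
```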
  • FIG. 5B schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform an inverse matrix update process for a matrix inversion process, according to an exemplary embodiment of the disclosure. More specifically, FIG. 5B schematically illustrates a method for configuring the computing system 500 to perform a vector-vector outer product computation (e.g., block 305 ( FIG. 3 ), FIG. 4B ) to incrementally update matrix values of the approximate inverse matrix P stored in the RPU array 505 .
  • FIG. 5B schematically illustrates a configuration of the computing system 500 in which multiplexers in the peripheral circuitry of the computing system 500 are activated to selectively connect column line driver circuitry 540 to the column lines C 1 , C 2 , . . . , Cn.
  • the column line driver circuitry 540 comprises a plurality of DAC circuit blocks 542 - 1 , 542 - 2 , . . . , 542 - n (collectively DAC circuit blocks 542 ) which are connected to respective column lines C 1 , C 2 , . . . , Cn.
  • the DAC circuit blocks 522 - 1 , 522 - 2 , . . . , 522 - n are connected to the respective row lines R 1 , R 2 , . . . , Rn.
  • the DAC circuit blocks 542 perform the same function as the DAC circuit blocks 522 as described above.
  • the DAC circuit blocks 542 - 1 , 542 - 2 , . . . , 542 - n generate analog voltages V 1 , V 2 , . . . , V n using the same or similar time-encoding techniques as discussed above to generate pulse-modulated voltage pulses (with a same amplitude but tunable pulse width) which correspond to the values of the respective digital error signals δ 1 , δ 2 , . . . , δ n .
  • for the update operation, the 1×n row vector of the matrix A that was applied to the rows for the vector-matrix multiplication is again input to the row lines, but scaled with the learning parameter η.
  • digital signals ⁇ x 1 , ⁇ x 2 , . . . , ⁇ x n are input to respective DAC circuit blocks 522 - 1 , 522 - 2 , . . . , 522 - n .
  • While FIG. 5A schematically illustrates an exemplary method for generating the aggregate column currents for a vector-matrix multiplication operation performed in the analog domain, FIG. 6 schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process using signed matrix values, according to exemplary embodiments of the disclosure.
  • FIG. 6 schematically illustrates a method for generating an aggregate column current I COL1 using different column currents I 1 + and I 1 ⁇ from corresponding columns C 1 + and C 1 ⁇ of two separate RPU arrays 510 and 610 , wherein the conductance is determined as (G + ⁇ G ⁇ ).
  • the magnitude of I COL1 will correspond to a given matrix value, wherein the sign of the matrix value will depend on whether I 1 + is greater than, equal to, or less than I 1 − .
  • a positive sign (I COL1 > 0) will be obtained when I 1 + > I 1 − .
  • a negative sign (I COL1 < 0) will be obtained when I 1 + < I 1 − .
  • in this scheme, G ij = G ij + − G ij − , where i and j are indices of the RPU cells within the RPU array 505 .
  • in some embodiments, a pair of identical RPU device arrays is utilized to encode positive (G ij + ) and negative (G ij − ) matrix values, wherein the matrix value (G ij ) of a given RPU cell is proportional to a difference of two conductance values stored in two corresponding devices (G ij + − G ij − ) located in identical positions of the pair of RPU arrays 510 and 610 (wherein the two RPU arrays 510 and 610 can be stacked on top of each other in a back-end-of-line metallization structure of a chip).
  • in this regard, a single RPU tile is deemed to be a pair of RPU arrays, together with the peripheral circuits that support the parallel operation of the arrays in all three cycles.
  • for the vector-matrix operation, positive voltage pulses V 1 , V 2 , . . . , V n are applied to the row lines of the first RPU array 510 , and corresponding negative voltage pulses −V 1 , −V 2 , . . . , −V n are applied to the row lines of the second RPU array 610 .
  • the aggregate column currents I 1 + and I 1 ⁇ that are output from the corresponding first columns C 1 + and C 1 ⁇ in the respective RPU arrays 510 and 610 are combined to generate a differential aggregate current I COL1 which is input to the readout circuit block 530 - 1 connected to the corresponding first columns C 1 + and C 1 ⁇ .
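  • A minimal sketch of this differential encoding, assuming the convention G ij = G ij + − G ij − : a signed matrix is split into two non-negative conductance arrays, and the signed result is recovered as a difference of the corresponding column readouts:

```python
import numpy as np

def encode_signed(P):
    """Split a signed matrix into two non-negative conductance arrays."""
    g_pos = np.clip(P, 0.0, None)    # positive parts -> first RPU array
    g_neg = np.clip(-P, 0.0, None)   # negative parts -> second RPU array
    return g_pos, g_neg

def differential_read(x, g_pos, g_neg):
    """Column current difference I+ - I- reproduces x @ P for signed P."""
    return x @ g_pos - x @ g_neg

P = np.array([[0.5, -1.0],
              [-0.25, 2.0]])
g_pos, g_neg = encode_signed(P)
x = np.array([1.0, 2.0])
print(differential_read(x, g_pos, g_neg))  # equals x @ P -> [0.0, 3.0]
```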
  • the RPU framework of FIG. 6 can be implemented to store real and imaginary matrix values in two distinct RPU arrays.
  • for example, the first RPU array 510 can be configured to store the real matrix values of a complex matrix, while the second RPU array 610 is configured to store the imaginary matrix values of the complex matrix. In this manner, the respective parts can then be processed separately, making it possible to obtain a conjugate transpose A* and a pseudoinverse A + .
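  • As a sketch of this real/imaginary split (assuming one array stores the real part of a complex matrix A and the other stores the imaginary part), a complex vector-matrix product can be assembled from four real products, and the conjugate transpose A* follows by transposing the real part and negating the transposed imaginary part:

```python
import numpy as np

def complex_matvec(x_re, x_im, a_re, a_im):
    """Compute x @ A for complex x and A using only real-valued arrays."""
    y_re = x_re @ a_re - x_im @ a_im
    y_im = x_re @ a_im + x_im @ a_re
    return y_re, y_im

def conjugate_transpose(a_re, a_im):
    """A* from the two stored parts: transpose, and negate the imaginary part."""
    return a_re.T, -a_im.T

A = np.array([[1 + 2j, 0.5j],
              [3.0 + 0j, 1 - 1j]])
x = np.array([1.0 + 1j, 2.0 + 0j])
y_re, y_im = complex_matvec(x.real, x.imag, A.real, A.imag)
print(np.allclose(y_re + 1j * y_im, x @ A))  # True
```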
  • Exemplary embodiments of the present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration
  • the computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention
  • the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
  • the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • a non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing.
  • a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
  • the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
  • a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages.
  • the computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the blocks may occur out of the order noted in the Figures.
  • two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • FIG. 7 schematically illustrates an exemplary architecture of a computing node which can host and execute a matrix inversion process according to an exemplary embodiment of the disclosure.
  • FIG. 7 illustrates a computing node 700 which comprises a computer system/server 712 , which is operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 712 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
  • Computer system/server 712 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system.
  • program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types.
  • Computer system/server 712 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer system storage media including memory storage devices.
  • computer system/server 712 in computing node 700 is shown in the form of a general-purpose computing device.
  • the components of computer system/server 712 may include, but are not limited to, one or more processors or processing units 716 , a system memory 728 , and a bus 718 that couples various system components including system memory 728 to the processors 716 .
  • the bus 718 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
  • the computer system/server 712 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 712 , and it includes both volatile and non-volatile media, removable and non-removable media.
  • the system memory 728 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 730 and/or cache memory 732 .
  • the computer system/server 712 may further include other removable/non-removable, volatile/nonvolatile computer system storage media.
  • storage system 734 can be provided for reading from and writing to a non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”).
  • a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”)
  • an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media
  • each can be connected to bus 718 by one or more data media interfaces.
  • memory 728 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
  • the program/utility 740 having a set (at least one) of program modules 742 , may be stored in memory 728 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment.
  • Program modules 742 generally carry out the functions and/or methodologies of embodiments of the disclosure as described herein.
  • Computer system/server 712 may also communicate with one or more external devices 714 such as a keyboard, a pointing device, a display 724 , etc., one or more devices that enable a user to interact with computer system/server 712 , and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 712 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 722 . Still yet, computer system/server 712 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 720 .
  • network adapter 720 communicates with the other components of computer system/server 712 via bus 718 .
  • It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 712 . Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, SSD drives, and data archival storage systems, etc.
  • Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service.
  • This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
  • On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
  • Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
  • Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
  • Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
  • Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure.
  • the applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail).
  • the consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  • Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider.
  • the consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
  • Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications.
  • the consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
  • Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
  • Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
  • Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
  • Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
  • A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes.
  • Referring to FIG. 8, cloud computing environment 50 includes one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 54A, desktop computer 54B, laptop computer 54C, and/or automobile computer system 54N, may communicate.
  • Nodes 10 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof.
  • This allows cloud computing environment 50 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device.
  • The types of computing devices 54A-N shown in FIG. 8 are intended to be illustrative only; computing nodes 10 and cloud computing environment 50 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser).
  • Referring now to FIG. 9, a set of functional abstraction layers provided by cloud computing environment 50 ( FIG. 8 ) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 9 are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided:
  • Hardware and software layer 60 includes hardware and software components.
  • hardware components include: mainframes 61 ; RISC (Reduced Instruction Set Computer) architecture based servers 62 ; servers 63 ; blade servers 64 ; storage devices 65 ; and networks and networking components 66 .
  • software components include network application server software 67 and database software 68 .
  • Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71 ; virtual storage 72 ; virtual networks 73 , including virtual private networks; virtual applications and operating systems 74 ; and virtual clients 75 .
  • management layer 80 may provide the functions described below.
  • Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment.
  • Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses.
  • Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources.
  • User portal 83 provides access to the cloud computing environment for consumers and system administrators.
  • Service level management 84 provides cloud computing resource allocation and management such that required service levels are met.
  • Service Level Agreement (SLA) planning and fulfillment 85 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
  • Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91 ; software development and lifecycle management 92 ; virtual classroom education delivery 93 ; data analytics processing 94 ; transaction processing 95 ; and various functions 96 for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing based on the exemplary methods and functions discussed above in conjunction with, e.g., FIG. 3 . Furthermore, in some embodiments, the hardware and software layer 60 would include the computing system 100 of FIG. 1 to implement or otherwise support the various workloads and functions 96 for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing.

Abstract

Matrix inversion systems and methods are implemented using an analog resistive processing unit (RPU) array for hardware accelerated computing. A request is received from an application to compute an inverse matrix of a given matrix, and a matrix inversion process is performed in response to the received request. The matrix inversion process includes storing a first estimated inverse matrix of the given matrix in an array of RPU cells, performing a first iterative process on the first estimated inverse matrix stored in the array of RPU cells to converge the first estimated inverse matrix to a second estimated inverse matrix of the given matrix, and reading the second estimated inverse matrix from the array of RPU cells upon completion of the first iterative process. An inverse matrix is returned to the application, wherein the returned inverse matrix is based, at least in part, on the second estimated inverse matrix.

Description

    BACKGROUND
  • This disclosure relates generally to analog resistive processing systems for neuromorphic computing, and techniques for performing hardware accelerated numerical computing tasks using an analog resistive processing system. Information processing systems such as neuromorphic computing systems and artificial neural network (ANN) systems are utilized in various applications such as machine learning and inference processing for cognitive recognition and computing. Such systems are hardware-based systems that generally include a large number of highly interconnected processing elements (referred to as “artificial neurons”) that operate in parallel to perform various types of computations. The artificial neurons (e.g., pre-synaptic neurons and post-synaptic neurons) are connected using artificial synaptic devices which provide synaptic weights that represent connection strengths between the artificial neurons. The synaptic weights can be implemented using an array of RPU cells having tunable resistive memory devices, wherein the conductance states of the RPU cells are encoded or otherwise mapped to the synaptic weights.
  • SUMMARY
  • Embodiments of the disclosure include computing systems, devices, and methods for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing. For example, an exemplary embodiment includes a method which comprises receiving a request from an application to compute an inverse matrix of a given matrix and performing a matrix inversion process in response to the received request. The matrix inversion process comprises storing a first estimated inverse matrix of the given matrix in an array of resistive processing unit (RPU) cells, performing a first iterative process on the first estimated inverse matrix stored in the array of RPU cells to converge the first estimated inverse matrix to a second estimated inverse matrix of the given matrix, and reading the second estimated inverse matrix from the array of RPU cells upon completion of the first iterative process. An inverse matrix is returned to the application, wherein the returned inverse matrix is based, at least in part, on the second estimated inverse matrix.
  • Another exemplary embodiment includes a device which comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, second control lines extending in a second direction across the array of RPU cells, and control circuitry. Each RPU cell is connected at an intersection of one of the first control lines and one of the second control lines. Each RPU cell comprises a resistive device with a tunable conductance, wherein conductance values of at least a portion of the resistive devices of the RPU cells in the array of RPU cells encode matrix values of an estimated inverse matrix that is stored in the array of RPU cells. The estimated inverse matrix stored in the array of RPU cells represents an estimate of an inverse matrix of a given matrix. The control circuitry is operatively coupled to the array of RPU cells to cause performance of a first operation and a second operation on the array of RPU cells. The first operation comprises the control circuitry applying a row vector, which represents a row of the given matrix, to inputs of the first control lines to perform a vector-matrix operation which comprises multiplying the input row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector on outputs of the second control lines. The second operation comprises the control circuitry applying an error vector to inputs of the second control lines and inputting the row vector to the inputs of the first control lines, to perform an outer product operation of the error vector and the row vector on the array of RPU cells to thereby update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
  • Another exemplary embodiment includes a computing system which comprises a digital processing system and a neuromorphic computing system coupled to the digital processing system. The digital processing system comprises one or more processors and memory to store program instructions that are executed by the one or more processors to perform a matrix inversion process to compute an inverse matrix of a given matrix. The neuromorphic computing system comprises at least one neural core. The at least one neural core comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, second control lines extending in a second direction across the array of RPU cells, and peripheral circuitry coupled to the first control lines and to the second control lines. Each RPU cell is connected at an intersection of one of the first control lines and one of the second control lines, and each RPU cell comprises a resistive device with a tunable conductance, wherein conductance values of at least a portion of the resistive devices of the RPU cells in the array of RPU cells encode matrix values of an estimated inverse matrix that is stored in the array of RPU cells. The estimated inverse matrix stored in the array of RPU cells represents an estimate of the inverse matrix of the given matrix. The digital processing system controls the peripheral circuitry to perform first and second operations of the matrix inversion process on the array of RPU cells. The first operation comprises the digital processing system applying a row vector, which represents a row of the given matrix, to inputs of the first control lines to perform a vector-matrix operation which comprises multiplying the row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector on outputs of the second control lines. The second operation comprises the digital processing system applying an error vector to inputs of the second control lines and inputting the row vector to the inputs of the first control lines, to perform an outer product operation of the error vector and the row vector on the array of RPU cells to thereby update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
  • Other embodiments will be described in the following detailed description of exemplary embodiments, which is to be read in conjunction with the accompanying figures.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 schematically illustrates a computing system for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing, according to an exemplary embodiment of the disclosure.
  • FIG. 2 schematically illustrates a computing system which comprises an array of resistive processing unit cells that can be used to perform a matrix inversion process, according to an exemplary embodiment of the disclosure.
  • FIG. 3 illustrates a method for performing a matrix inversion process according to an exemplary embodiment of the disclosure.
  • FIG. 4A schematically illustrates a method for performing an analog vector-matrix multiplication operation of a matrix inversion process that is performed on an array of resistive processing unit cells of a computing system, according to an exemplary embodiment of the disclosure.
  • FIG. 4B schematically illustrates a method for performing an analog vector-vector multiplication operation of a matrix inversion process that is performed on an array of resistive processing unit cells of a computing system to update an approximate inverse matrix, according to an exemplary embodiment of the disclosure.
  • FIG. 5A schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process, according to an exemplary embodiment of the disclosure.
  • FIG. 5B schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform an analog vector-vector multiplication operation of a matrix inversion process to update matrix values of an approximate inverse matrix stored in the array of resistive processing unit cells, according to an exemplary embodiment of the disclosure.
  • FIG. 6 schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process using signed matrix values, according to exemplary embodiments of the disclosure.
  • FIG. 7 schematically illustrates an exemplary architecture of a computing node which can host and execute a matrix inversion process according to an exemplary embodiment of the disclosure.
  • FIG. 8 depicts a cloud computing environment according to an exemplary embodiment of the disclosure.
  • FIG. 9 depicts abstraction model layers according to an exemplary embodiment of the disclosure.
  • DETAILED DESCRIPTION
  • Embodiments of the invention will now be described in further detail with regard to systems and methods for performing matrix inversion using an analog resistive processing unit (RPU) array. It is to be understood that the various features as shown in the accompanying drawings are schematic illustrations that are not drawn to scale. Moreover, the same or similar reference numbers are used throughout the drawings to denote the same or similar features, elements, or structures, and thus, a detailed explanation of the same or similar features, elements, or structures will not be repeated for each of the drawings. Further, the term “exemplary” as used herein means “serving as an example, instance, or illustration”. Any embodiment or design described herein as “exemplary” is not to be construed as preferred or advantageous over other embodiments or designs.
  • FIG. 1 schematically illustrates a computing system for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing, according to an exemplary embodiment of the disclosure. In particular, FIG. 1 schematically illustrates a computing system 100 comprising an application 110, a digital processing system 120, and a neuromorphic computing system 130. The digital processing system 120 comprises a plurality of processor cores 122. The neuromorphic computing system 130 comprises a plurality of neural cores 132. Each neural core 132 comprises one or more analog resistive processing unit (RPU) arrays 134 (e.g., analog RPU crossbar array hardware). The neural cores 132 are configured to support hardware acceleration for computing an inverse matrix by performing multiply-accumulate (MAC) operations in the analog domain, thereby accelerating numerical operations such as vector-matrix multiplication, matrix-vector multiplication, vector-vector multiplication, and/or matrix multiplication operations, which are performed on the RPU arrays 134.
  • The digital processing system 120 controls the execution of a matrix inversion process 140 which is performed to compute an inverse matrix for a given matrix A provided by the application 110. The matrix inversion process 140 comprises optimization solver methods, including a stochastic gradient descent (SGD) process 142 and a Newton optimization process 144, which are utilized during the matrix inversion process 140 to compute an inverse matrix. In some embodiments, the SGD process 142 and the Newton optimization process 144 are software modules that are executed by the processor cores 122 of the digital processing system 120 to perform the matrix inversion process 140. The matrix inversion process 140 utilizes the neuromorphic computing system 130 for hardware acceleration to compute an inverse matrix by performing multiply-accumulate (MAC) operations in the analog domain to perform various in-memory computations, such as vector-matrix and vector-vector operations, on an approximate (estimated) inverse matrix Aest −1 that is stored in one or more of the RPU arrays 134, as schematically illustrated in FIG. 1.
  • The application 110 may comprise any type of computing application (e.g., scientific computing applications, engineering applications, graphics rendering applications, signal processing applications, MIMO (Multiple-Input, Multiple-Output) systems for wireless communications, cryptography, etc.) which utilizes matrices and inverse matrices as computational objects to perform numerical operations, solve linear equations, and perform other computations. As schematically shown in FIG. 1, during the execution of the application 110, the application 110 may invoke the matrix inversion process 140 to compute an inverse matrix A−1 of a given matrix A which is input to the matrix inversion process 140. An exemplary embodiment of the matrix inversion process 140 will be discussed in further detail below in conjunction with FIG. 3. In some embodiments, the matrix A comprises a symmetric positive definite (SPD) matrix. An SPD matrix is a square symmetric matrix which has positive eigenvalues. SPD matrices arise in many physical and mathematical contexts for which inverse matrices of the SPD matrices are needed to perform computations.
  • In some embodiments, the digital processing system 120 controls the execution of matrix inversion process 140. As an initial step, upon receiving the matrix A from the application 110 requesting a computation of an inverse matrix A−1, the matrix inversion process 140 will configure one or more cores 132 and associated RPU arrays 134 to provide hardware acceleration support for the matrix inversion process. The matrix inversion process 140 computes an initial approximate inverse matrix Aest −1 (e.g., a random inverse matrix) and stores the initial approximate inverse matrix Aest −1 in one or more of the RPU arrays 134 of the one or more neural cores 132 which are configured to support the matrix inversion process 140. The SGD process 142 performs an iterative optimization process which utilizes numerical values in row vectors of the given matrix A as training data to train the approximate inverse matrix Aest −1 and perform an iterative update process in the analog domain (e.g., vector-matrix operations, vector-vector operations, etc. performed on the approximate inverse matrix Aest −1 in the RPU arrays 134) to update the values of the approximate inverse matrix Aest −1 based on errors that are computed against row vectors of an identity matrix I. The iterative SGD process 142 converges the approximate inverse matrix Aest −1 to a close approximation of the actual inverse matrix A−1.
  • Upon completion of the SGD process 142, the matrix inversion process 140 reads out the approximate inverse matrix Aest −1 from the RPU arrays 134 of the neuromorphic computing system 130 and performs an optional post-processing optimization in the digital domain using, e.g., the Newton optimization process 144 to further enhance the accuracy of the approximate inverse matrix Aest −1. Upon completion of the optimization process (e.g., the analog optimization and the optional digital optimization), the matrix inversion process 140 returns the optimized approximate inverse matrix Aest −1 as a final inverse matrix Afinal −1. Details of exemplary embodiments of the optimization methods 142 and 144 will be discussed in further detail below with reference to, e.g., FIG. 3.
  • FIG. 2 schematically illustrates a computing system which comprises an array of resistive processing unit cells that can be used to perform a matrix inversion process, according to an exemplary embodiment of the disclosure. For example, FIG. 2 schematically illustrates an exemplary embodiment of a neural core 132 and associated RPU array(s) 134 of the neuromorphic computing system 130 of FIG. 1. More specifically, FIG. 2 schematically illustrates a computing system 200 (e.g., neuromorphic computing system) which comprises a two-dimensional (2D) crossbar array of RPU cells 210 arranged in a plurality of rows R1, R2, R3, . . . , Rm, and a plurality of columns C1, C2, C3, . . . , Cn. The RPU cells 210 in each row R1, R2, R3, . . . , Rm are commonly connected to respective row control lines RL1, RL2, RL3, . . . , RLm (collectively, row control lines RL). The RPU cells 210 in each column C1, C2, C3, . . . , Cn are commonly connected to respective column control lines CL1, CL2, CL3, . . . , CLn (collectively, column control lines CL). Each RPU cell 210 is connected at (and between) a cross-point (or intersection) of a respective one of the row and column control lines. In an exemplary embodiment, the number of rows (m) and the number of columns (n) are the same (i.e., n=m). For example, in some embodiments, the computing system 200 comprises a 4,096×4,096 array of RPU cells 210.
  • The computing system 200 further comprises peripheral circuitry 220 connected to the row control lines RL1, RL2, RL3, . . . , RLm, as well as peripheral circuitry 230 connected to the column control lines CL1, CL2, CL3, . . . , CLn. Further, the peripheral circuitry 220 is connected to a data input/output (I/O) interface block 225, and the peripheral circuitry 230 is connected to a data I/O interface block 235. The computing system 200 further comprises control signal circuitry 240 which comprises various types of circuit blocks such as power, clock, bias and timing circuitry to provide power distribution and control signals and clocking signals for operation of the peripheral circuitry 220 and 230 of the computing system 200.
  • In some embodiments, each RPU cell 210 in the computing system 200 comprises a resistive element with a conductance value that is tunable. During operation, some or all of the RPU cells 210 in the computing system 200 comprise respective conductance values that are mapped to respective numerical matrix values of an approximate inverse matrix Aest −1 that is stored in the array of RPU cells 210. In some embodiments, the resistive elements of the RPU cells 210 are implemented using resistive devices such as resistive switching devices (interfacial or filamentary switching devices), ReRAM, memristor devices, phase change memory (PCM) devices, etc., and other types of devices which have a tunable conductance (or tunable resistance level) which can be programmatically adjusted within a range of a plurality of different conductance levels to tune the weight of the RPU cell 210. In some embodiments, the variable conductance elements of the RPU cells 210 can be implemented using ferroelectric devices such as ferroelectric field-effect transistor devices. Furthermore, in some embodiments, the RPU cells 210 can be implemented using an analog CMOS-based framework in which each RPU cell 210 comprises a capacitor and a read transistor. With this framework, the capacitor serves as a memory element of the RPU cell 210 and stores a weight value in the form of a capacitor voltage, wherein the capacitor voltage is applied to a gate terminal of the read transistor to modulate a channel resistance of the read transistor based on the level of the capacitor voltage, and wherein the channel resistance of the read transistor represents the conductance of the RPU cell and is correlated to a level of a read current that is generated based on the channel resistance.
  • While the row control lines RL and column control lines CL are each shown in FIG. 2 as a single line for ease of illustration, it is to be understood that each row and column control line can include two or more control lines connected to the RPU cells 210 in the respective rows and columns, depending on the implementation and the specific architecture of the RPU cells 210. For example, in some embodiments, each row control line RL can include a complementary pair of word lines for a given RPU cell 210. Moreover, each column control line CL may comprise multiple control lines including, e.g., one or more source lines (SL) and one or more bit lines (BL).
  • The peripheral circuitry 220 and 230 comprises various circuit blocks which are connected to the respective rows and columns in the 2D array of RPU cells 210, and which are configured to perform vector-matrix multiply functions, matrix-vector multiply functions, and outer product update operations to implement operations needed to perform a matrix inversion process, according to an exemplary embodiment of the disclosure. For example, in some embodiments, to support RPU cell read/sensing operations (e.g., read a weight value of a given RPU cell 210), the peripheral circuitry 220 and 230 comprises pulse-width modulation (PWM) circuitry and read pulse driver circuitry to generate and apply PWM read pulses to the RPU cells 210, in response to input vector values (read input values) received during different operations.
  • More specifically, in some embodiments, the peripheral circuitry 220 and 230 comprises digital-to-analog (D/A) converter circuitry that is configured to receive digital input vectors (to be applied to rows or columns) and convert the digital input vectors into analog input vector values that are represented by input voltages of varying pulse width. In some embodiments, a time-encoding scheme is used in which input vectors are represented by fixed amplitude Vin=1 V pulses with a tunable duration (e.g., the pulse duration is a multiple of 1 ns and is proportional to the value of the input vector). The input voltages applied to rows (or columns) generate output vector values which are represented by output currents, wherein the weights of the RPU cells 210 are read out by measuring the output currents.
  • The peripheral circuitry 220 and 230 further comprises current integrator circuitry and analog-to-digital (A/D) converter circuitry to integrate read currents (IREAD) which are output and accumulated from the connected RPU cells 210 and convert the integrated currents into digital values (read output values) for subsequent computation. In particular, the currents generated by the RPU cells 210 are summed on the columns (or rows) and this total current is integrated over a measurement time, tmeas, by current readout circuitry of the peripheral circuitry 220 and 230. The current readout circuitry comprises current integrators and analog-to-digital (A/D) converters. In some embodiments, each current integrator comprises an operational amplifier that integrates the current output from a given column (or row) (or differential currents from pairs of RPU cells implementing negative and positive weights) on a capacitor, and an analog-to-digital (A/D) converter converts the integrated current (e.g., an analog value) to a digital value.
  • The data I/O interfaces 225 and 235 are configured to interface with a digital processing core, wherein the digital processing core is configured to process input/outputs to the computing system 200 (neural core) and route data between different RPU arrays. The data I/O interfaces 225 and 235 are configured to (i) receive external control signals and data from a digital processing core and provide the received control signals and data to the peripheral circuitry 220 and 230, and (ii) receive digital read output values from peripheral circuitry 220 and 230, and send the digital read output values to a digital processing core for processing.
  • FIG. 3 illustrates a method for performing a matrix inversion process according to an exemplary embodiment of the disclosure. In some embodiments, FIG. 3 illustrates a process flow which is implemented by the computing system 100 of FIG. 1. During run-time execution of a given application, the application may need to perform a computation that requires an inverse matrix A−1 of a given matrix A. The computing system will receive a request from the given application to determine the inverse matrix A−1 of the given matrix A (block 300). The request will include the values of matrix A. In some embodiments, the matrix A comprises a square matrix, e.g., n×n matrix, with n rows and n columns, where n can be relatively large (e.g., 100 or greater). In some embodiments, the matrix A comprises an SPD matrix. The computing system invokes a matrix inversion process (e.g., process 140, FIG. 1) to compute the inverse matrix A−1 of the input matrix A.
  • In some embodiments, the invocation of the matrix inversion process comprises an initial process of configuring the neuromorphic computing system 130 to provide accelerator hardware support for performing the matrix inversion process (block 301). For example, in some embodiments, the digital processing system 120 communicates with a programming interface of the neuromorphic computing system 130 to configure one or more neurons and a routing system of the neuromorphic computing system 130 to allocate and configure one or more neural cores to implement one or more interconnected RPU arrays for storing matrix values of an approximate (estimated) inverse matrix Aest −1 and performing in-memory computations on the approximate inverse matrix Aest −1 (alternatively referred to as the approximate inverse matrix P).
  • In some embodiments, the number of RPU arrays that are allocated and interconnected will vary depending on the size of the matrix A and the size of the RPU arrays. For example, if the RPU arrays have a size of 4096×4096, then one RPU array can be configured to store the values of the approximate inverse matrix Aest −1 and perform a matrix inversion process for an n×n matrix A, where n is 4096 or less. In some embodiments, when the approximate inverse matrix Aest −1 is smaller than the physical RPU array on which it is stored, any unused RPU cells can be set to zero and/or unused inputs to the RPU array can be padded with “zero” voltages. In some embodiments, when the size of the n×n matrix A is greater than the size of a single RPU array, multiple RPU arrays can be operatively interconnected to form an RPU array which is large enough to store the values of the approximate inverse matrix Aest −1 of the n×n matrix A.
  • Next, the matrix inversion process determines an initial approximation of the inverse matrix Aest −1 (e.g., a first estimated inverse matrix) and stores the initial approximation of the inverse matrix Aest −1 on the allocated RPU array(s) of the configured neural core(s) (block 302). In some embodiments, the initial approximate inverse matrix Aest −1 comprises a random matrix that is determined using a predetermined function. For example, in some embodiments, the initial approximate inverse matrix Aest −1 comprises a random matrix which comprises a set of uniformly distributed random numbers with a mean of zero and a variance of 1. In this embodiment, the initial values of the approximate inverse matrix Aest −1 range from 0 to 1. Other techniques can be implemented for computing a random matrix based on, e.g., the given application, the size of the matrix A, etc. A software sketch of this initialization step is given below.
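For illustration, the following sketch emulates this initialization step in software. It is a minimal sketch under stated assumptions: the RPU array is modeled as a NumPy matrix rather than physical conductances, the 4,096×4,096 array size is taken from the example above, and the helper name initialize_estimate is hypothetical.

```python
import numpy as np

def initialize_estimate(n, array_size=4096, rng=None):
    """Draw a random initial estimate P of the inverse matrix and embed it
    in a (simulated) RPU array, zero-padding the unused cells."""
    rng = np.random.default_rng() if rng is None else rng
    P = rng.uniform(0.0, 1.0, (n, n))          # uniformly distributed initial values
    rpu = np.zeros((array_size, array_size))   # unused RPU cells are set to zero
    rpu[:n, :n] = P                            # top-left block holds the estimate
    return rpu
```

In hardware, the same step corresponds to programming the RPU cell conductances to encode these values before the SGD iterations begin.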
  • A next phase of the exemplary matrix inversion process comprises performing an iteration of an SGD process (e.g., blocks 303, 304, 305, 306, 307) to train the approximate inverse matrix Aest −1 using the rows of the matrix A as training data, and performing an iterative update process to update the values of the approximate inverse matrix Aest −1 based on errors that are computed against row vectors of an identity matrix I. For ease of illustration and discussion, the process flow of blocks 303, 304, 305, 306, and 307 will be described in the context of an exemplary n×n matrix A, wherein n=3, such that:
  • $$A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix},$$
  • and wherein an exemplary n×n Identity matrix (denoted by I) is:
  • $$I = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$$
  • The Identity matrix I comprises a main diagonal with all values on the main diagonal equal to 1, and all other values equal to zero. The Identity matrix I is a square matrix that corresponds to a scalar value of 1 such that for the given n×n matrix A (which is invertible), the inverse of matrix A is an n×n matrix A−1 which satisfies the condition: A×A−1=A−1×A=I. For purposes of illustration and discussion, assume that the approximate inverse matrix Aest −1 is denoted P where:
  • $$P = \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}.$$
  • Referring to FIG. 3, as an initial step of the exemplary SGD process, an analog vector-matrix multiplication is performed on the RPU array(s) by multiplying the ith row vector of matrix A by the approximate inverse matrix P (block 303). For an n×n matrix A with n rows, the term Ai denotes the ith row of matrix A, where i=[1, 2, . . . , n]. The vector-matrix multiplication Ai×P yields a 1×n vector Yi=[y1, y2, . . . , yn]. In the above example where n=3, the matrix A has three rows, i=[1, 2, 3], with first row A1=[a11, a12, a13], and the vector-matrix multiplication A1×P:
  • $$\begin{bmatrix} a_{11} & a_{12} & a_{13} \end{bmatrix} \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}$$
  • yields a 1×3 vector Y1=[y1, y2, y3], wherein:
  • $$\begin{aligned} y_1 &= a_{11}p_{11} + a_{12}p_{21} + a_{13}p_{31} \\ y_2 &= a_{11}p_{12} + a_{12}p_{22} + a_{13}p_{32} \\ y_3 &= a_{11}p_{13} + a_{12}p_{23} + a_{13}p_{33} \end{aligned}$$
  • Ideally, if the approximate inverse matrix P were the actual inverse matrix A−1 of the matrix A, the ith entry of the vector Yi would be 1 and all other entries would be 0 (e.g., for i=1, Y1=[1, 0, . . . , 0]). However, since there will likely be some difference between the actual values of the vector Y1=[y1, y2, . . . , yn] and the target values [1, 0, . . . , 0], an iteration of an update process is performed to update the numerical values of the approximate inverse matrix P.
  • In particular, the resulting vector Yi=[y1, y2, . . . , yn] is output to the SGD process 142 executing in the digital processing system 120, wherein an error vector δ is computed in the digital domain based on a difference between the output vector Yi=[y1, y2, . . . , yn] and a target row vector Ii of I (block 304). More specifically, in some embodiments, an error vector δi=[δ1, δ2, . . . , δn] is computed as (Ai×P)−Ii, wherein Ii denotes the ith row of I, where i=[1, 2, . . . , n]. In the above example where n=3 and A1×P yields the vector Y1=[y1, y2, y3], the error vector δ1 is computed as:
  • $$\delta_1 = (A_1 \times P) - I_1 = Y_1 - I_1 = [(y_1 - 1),\; (y_2 - 0),\; (y_3 - 0)].$$
  • Next, the approximate inverse matrix P is updated by performing an analog outer product of the error vector δi and a transpose Ai T of the current row vector Ai (block 305). More specifically, in some embodiments, an update process is performed as follows:
  • $$P' := P + \eta \, A_i^{T} \times \delta_i,$$
  • where P′ denotes an update to the current approximate inverse matrix P, where δi=[δ1, δ2, . . . , δn] denotes the error vector computed as (Ai×P)−Ii, and where η denotes a learning rate (e.g., ranging from 0.01 to 0.001). The learning rate η is a scale factor that is applied to the values of the transposed row vector Ai T. In this example, the transposed row vector Ai T represents the ith column of a transpose AT of the matrix A. A software sketch of this single-row update step is given below.
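A single pass of blocks 303, 304, and 305 can be emulated in software as shown below. This is a hedged sketch, not the hardware implementation: the RPU array is modeled as the NumPy matrix P, and the update is written as a descent step P −= η·Ai^T×δi with δi = (Ai×P) − Ii, which is equivalent to the P + η·Ai^T×δi form above with δ taken in the opposite sign convention.

```python
import numpy as np

def sgd_step(P, A, i, eta=0.01):
    """One SGD iteration for row i: analog forward pass, digital error
    computation, analog outer-product update (all emulated in NumPy)."""
    n = A.shape[0]
    y = A[i] @ P                       # block 303: vector-matrix multiply on the RPU array
    target = np.eye(n)[i]              # i-th row of the identity matrix I
    delta = y - target                 # block 304: error vector, computed in the digital domain
    P -= eta * np.outer(A[i], delta)   # block 305: rank-1 outer-product update on the array
    return P
```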
  • The process flow of blocks 303, 304 and 305 is performed for all rows of the matrix A in sequence, wherein each iteration of blocks 303, 304, and 305 over all the rows of the matrix A comprises one “epoch.” In this regard, each “epoch” results in n updates to the approximate inverse matrix P as follows:
  • $$P' := P + \eta \, A_1^{T} \times \delta_1, \quad P' := P + \eta \, A_2^{T} \times \delta_2, \quad \ldots, \quad P' := P + \eta \, A_n^{T} \times \delta_n.$$
  • After each iteration of blocks 303, 304, and 305 (i.e., each iteration of the SGD process), a determination is made as to whether all the row vectors of the matrix A have been processed for the given epoch (block 306). If all rows have not yet been processed for the given epoch (negative determination in block 306), the next row vector Ai of the matrix A will be input to the RPU array(s) (block 307) to perform the next vector-matrix multiplication on the RPU array(s) (return to block 303).
  • On the other hand, if it is determined that all rows have been processed for the given epoch (affirmative determination in block 306), a determination is made as to whether a predefined number of epochs have been performed (block 308). In some embodiments, the SGD process is implemented to train the approximate inverse matrix P for a predetermined number of epochs in which it is assumed that the approximate inverse matrix P converges to a sufficient approximation of the actual inverse matrix A−1 with an analog accuracy (e.g., error sum of squares (SSE)) that is acceptable for the given application, or which otherwise provides a sufficiently accurate approximation of the actual inverse matrix A−1 to be utilized as a seed in a digital optimization process that may be applied in the digital domain to obtain an approximation of the actual inverse matrix A−1 with even greater accuracy. In an exemplary embodiment, the predefined number of epochs can be, e.g., 100 epochs. However, the predefined number of epochs will vary depending on the application and the degree of accuracy needed from the analog SGD training process. In some embodiments, the predefined number of epochs utilized for a given application can be determined based on historical empirical information from which one may determine the number of epochs that typically results in an approximate inverse matrix with a target accuracy.
  • If it is determined that the predefined number of epochs have not been performed (negative determination in block 308), a next epoch will be performed starting with inputting the first row vector A1 of the matrix A to the RPU array(s) and performing the analog vector-matrix multiplication operation (return to block 303). On the other hand, if it is determined that the predefined number of epochs have been performed (affirmative determination in block 308), the approximate inverse matrix P (e.g., second estimated inverse matrix) will be read out from the RPU array(s) (block 309). In some embodiments, the approximate inverse matrix P which is read out from the RPU array(s) (e.g., the second estimated inverse matrix) will be returned to the requesting application as the final inverse matrix Afinal −1. In this instance, the approximate inverse matrix P that is computed in the analog domain using the SGD process may be deemed sufficiently accurate for the given application.
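The epoch loop of blocks 302 through 309 can be sketched as follows. This is a software emulation under stated assumptions (the epoch count and learning rate are illustrative defaults, not prescribed values), with the per-row step from the earlier sgd_step sketch folded inline so the block is self-contained.

```python
import numpy as np

def analog_sgd_inverse(A, epochs=100, eta=0.01, rng=None):
    """Emulate the analog SGD phase: initialize a random estimate, sweep all
    rows of A per epoch for a fixed epoch budget, then read out the result."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    I = np.eye(n)
    P = rng.uniform(0.0, 1.0, (n, n))         # block 302: random initial estimate
    for _ in range(epochs):                   # block 308: fixed number of epochs
        for i in range(n):                    # blocks 306/307: every row, in sequence
            y = A[i] @ P                      # block 303: vector-matrix multiply
            delta = y - I[i]                  # block 304: error vs. identity row
            P -= eta * np.outer(A[i], delta)  # block 305: outer-product update
    return P                                  # block 309: read the estimate off the array
```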
  • In some embodiments, the approximate inverse matrix P (e.g., second estimated inverse matrix) which is read out from the RPU array(s) will be further processed in the digital domain by performing an iterative digital optimization process to converge the approximate inverse matrix P to a final inverse matrix Afinal −1 (e.g., third estimated inverse matrix) which is more accurate than the approximate inverse matrix P (e.g., second estimated inverse matrix) which is read out from the RPU array(s) (block 310). For example, in some embodiments, digital post-processing is performed using Newton's optimization process.
  • More specifically, in some embodiments, the digital optimization is performed using an iterative process as follows: $P_{i+1} := 2P_i - P_i A P_i$, where i=(0, 1, 2, . . . ) denotes the ith iteration, and wherein i=0 represents the first iteration (producing P1), which is performed on the approximate inverse matrix P (e.g., P0) initially read out from the RPU array(s).
  • In some embodiments, after each iterative computation, an error checking process is performed which comprises (i) computing an error based on (Pi+1×A)−I, and (ii) comparing the computed error to an error threshold value. Each iteration of the digital optimization should result in the matrix multiplication (Pi+1×A) becoming increasingly closer to the identity matrix I. If the computed error (following a given iteration) is less than or equal to the error threshold value, the digital optimization process can terminate, and the last computed approximate inverse matrix Pi+1 is returned as the final inverse matrix Afinal −1 to the requesting application (block 311). In some embodiments, the error threshold value is based on, e.g., a target bit precision (e.g., 64-bit precision, 128-bit precision, etc.) which is desired for the given application.
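A minimal sketch of this digital refinement is given below, assuming a Frobenius-norm error check against the identity; the specific norm, tolerance, and iteration cap are illustrative assumptions, as the document specifies only the error form (P_{i+1}×A)−I.

```python
import numpy as np

def newton_refine(P, A, tol=1e-12, max_iters=50):
    """Newton refinement: P_{i+1} := 2*P_i - P_i A P_i, terminating once
    the error ||P A - I|| falls below the threshold."""
    I = np.eye(A.shape[0])
    for _ in range(max_iters):
        P = 2 * P - P @ A @ P                # one Newton iteration
        err = np.linalg.norm(P @ A - I)      # error based on (P x A) - I
        if err <= tol:                       # threshold tied to target precision
            break
    return P
```

Note that this iteration converges only when the spectral radius of (I − P₀A) is below 1, which is one reason a sufficiently accurate analog seed from the SGD phase matters.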
  • FIGS. 4A, 4B, 5A and 5B schematically illustrate the vector-matrix multiplication and outer product (update) computations that are performed on an array of RPU cells to implement the process flow of blocks 303, 304, and 305 (FIG. 3). In particular, FIG. 4A schematically illustrates a vector-matrix multiplication that is performed on an array of RPU cells 210 of the computing system 200 of FIG. 2. The conductance values of the RPU cells 210 are mapped to respective matrix elements 212 of an approximate inverse matrix P that is stored in the array of RPU cells 210. Each RPU cell 210 performs a local multiplication operation 214 and local summation operation 216 to perform a vector matrix multiplication operation (e.g., block 303, FIG. 3). In particular, FIG. 4A illustrates a vector-matrix multiplication y=x×P, where x denotes a 1×n row vector of the matrix A, where P denotes the approximate inverse of the matrix A (wherein the matrix elements 212 of the RPU cells 210 are encoded by the respective conductance values of the RPU cells 210), and where y denotes a 1×n vector that is generated as a result of the vector-matrix multiplication y=x×P. As schematically illustrated in FIG. 4A, the input row vector x is applied to the row lines, and the output vector y is output on the column lines. For ease of illustration, FIG. 4A shows two elements (xi, xi+1) of the input row vector x and the corresponding two elements (yi, yi+1) of the output vector y which are generated as a result of the vector-matrix multiplication.
  • Further, FIG. 4B illustrates a vector-vector multiplication (outer product) that is performed on the array of RPU cells 210 of the computing system 200 of FIG. 2 to update the matrix values of the approximate inverse matrix P that is stored in the array of RPU cells 210. More specifically, FIG. 4B schematically illustrates an update process P′:=P+ηx×δ, where x denotes the 1×n row vector of the matrix A that was applied to the rows for the vector-matrix operation, but scaled with a parameter η (a global learning rate), and where δ is a 1×n error vector that is applied to the column lines. The vector-vector outer product comprises a multiplication operation and an incremental weight update that is performed locally at each cross-point (RPU cell 210) in the computing system 200 of FIG. 2. The update process is computed as: pij←pij+ηxi×δj, where pij represents the matrix value of P stored in the RPU cell 210 at the cross-point of the ith row and the jth column (for simplicity, the layer index is omitted). For the update process, voltage pulses representing vectors x and δ are simultaneously applied to the rows and the columns, wherein each RPU cell 210 performs a local multiplication and summation operation by processing the voltage pulses coming from the column and the row, thereby achieving an incremental weight update.
  • In some embodiments, to determine the product of the xi and δj vectors for the incremental update process, stochastic translator circuitry in the peripheral circuitry 220 and 230 is utilized to generate stochastic bit streams that represent the input vectors xi and δj. The stochastic bit streams for the vectors xi and δj are applied to the rows and columns in the 2D crossbar array of RPU cells 210, wherein the conductance value of a given RPU cell 210 (and hence, the corresponding matrix value) will change depending on the coincidence of the xi and δj stochastic pulse streams input to the given RPU cell 210. The vector cross product operations for the update operation are implemented based on the known concept that coincidence detection (using an AND logic gate operation) of stochastic streams representing real numbers is equivalent to a multiplication operation, as illustrated in the sketch below.
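To make the coincidence-detection idea concrete, the sketch below emulates the stochastic multiplication of two scalars: each operand is encoded as a Bernoulli bit stream whose on-probability equals its magnitude, and the expected fraction of AND-coincidences equals the product. The stream length and sign handling are illustrative assumptions; real translator circuitry operates on pulse trains, not NumPy arrays.

```python
import numpy as np

def stochastic_multiply(x, d, stream_len=1024, rng=None):
    """Approximate x*d (with |x|, |d| <= 1) via coincidence detection of
    stochastic bit streams, mirroring the AND-gate update described above."""
    rng = np.random.default_rng() if rng is None else rng
    bits_x = rng.random(stream_len) < abs(x)   # stream whose density encodes |x|
    bits_d = rng.random(stream_len) < abs(d)   # stream whose density encodes |d|
    coincidences = np.sum(bits_x & bits_d)     # AND gate: both pulses present
    return np.sign(x) * np.sign(d) * coincidences / stream_len

# In expectation, stochastic_multiply(x, d) equals x * d, so an RPU cell that
# nudges its conductance on each coincidence realizes the increment eta*x_i*delta_j.
```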
  • FIG. 5A schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process, according to an exemplary embodiment of the disclosure. In particular, FIG. 5A schematically illustrates a computing system 500 which comprises a crossbar array of RPU cells 505, wherein each RPU cell 510 in the array 505 comprises an analog non-volatile resistive element (represented as a variable resistor having a tunable conductance G) at the intersection of each row (R1, R2, . . . , Rn) and column (C1, C2, . . . , Cn). As depicted in FIG. 5A, the array of RPU cells 505 provides a matrix of conductance values Gij which are mapped to matrix values of an approximate inverse matrix P, which are encoded by the conductance values Gij (where i represents a row index and j denotes a column index) of the respective RPU cells 510.
  • To perform a vector-matrix multiplication process for the matrix inversion process (e.g., block 303 (FIG. 3), FIG. 4A), multiplexers in the peripheral circuitry of the computing system 500 are activated to selectively connect row line driver circuitry 520 to the row lines R1, R2, . . . , Rn. The row line driver circuitry 520 comprises a plurality of digital-to-analog (DAC) circuit blocks 522-1, 522-2, . . . , 522-n (collectively, DAC circuit blocks 522) which are connected to respective row lines R1, R2, . . . , Rn. In addition, multiplexers in the peripheral circuitry of the computing system 500 are activated to selectively connect readout circuitry 530 to the column lines C1, C2, . . . , Cn. The readout circuitry 530 comprises a plurality of readout circuit blocks 530-1, 530-2, . . . , 530-n, which are connected to respective column lines C1, C2, . . . , Cn. The readout circuit blocks 530-1, 530-2, . . . , 530-n comprise respective current integrator circuitry 532-1, 532-2, . . . , 532-n, and respective analog-to-digital (ADC) circuitry 534-1, 534-2, . . . , 534-n. Each current integrator comprises an operational transconductance amplifier (OTA) with negative capacitive feedback to convert an input current (an aggregate column current) to an output voltage on an output node of the current integrator circuit. At the end of an integration period, each ADC circuit latches in the output voltage generated at the output node of its respective current integrator circuit and quantizes the output voltage to generate a digital output signal.
  • In the exemplary configuration of FIG. 5A, a vector-matrix multiplication process for a matrix inversion process is performed by inputting a digital input vector x=[x1, x2, . . . , xn] to the computing system 500, wherein x represents a 1×n row vector of a given row of the matrix A. The digital signals x1, x2, . . . , xn are input to respective DAC circuit blocks 522-1, 522-2, . . . , 522-n, which generate analog voltage signals V(t) (voltage as a function of time) on the row lines R1, R2, . . . , Rn that are proportional to the values of x=[x1, x2, . . . , xn]. In some embodiments, the DAC circuit blocks 522-1, 522-2, . . . , 522-n each comprise pulse-width modulation circuitry and driver circuitry configured to generate pulse-width modulated (PWM) read pulses V1, V2, . . . , Vn that are applied to the respective row lines R1, R2, . . . , Rn.
  • More specifically, in some embodiments, the DAC circuit blocks 522-1, 522-2, . . . , 522-n are configured to perform a digital-to-analog conversion process using a time-encoding scheme in which the input vectors are represented by fixed-amplitude pulses (e.g., V=1V) with a tunable duration, wherein the pulse duration is a multiple of a prespecified time period (e.g., 1 nanosecond) and is proportional to the value of the input vector. For example, a given digital input value of 0.5 can be represented by a voltage pulse of 40 ns, while a digital input value of 1 can be represented by a voltage pulse of 80 ns (e.g., a digital input value of 1 can be encoded to an analog voltage pulse with a pulse duration that is equal to the integration time Tmeas). As shown in FIG. 5A, the resulting analog input voltages V1, V2, . . . , Vn (e.g., read pulses) are applied to the row lines R1, R2, . . . , Rn.
  • To perform a vector-matrix multiplication, the analog input voltages V1, V2, . . . , Vn (e.g., read pulses) are applied to the row lines R1, R2, . . . , Rn, wherein each RPU cell 510 generates a corresponding read current IREAD=Vi×Gij (based on Ohm's law), where Vi denotes the analog input voltage applied to the given RPU cell 510 on the given row i and Gij denotes the conductance value of the given RPU cell 510 (at the given row i and column j). As shown in FIG. 5A, the read currents that are generated by the RPU cells 510 on each column j are summed together (based on Kirchhoff's current law) to generate respective currents I1, I2, . . . , In at the output of the respective columns C1, C2, . . . , Cn. In this manner, the resulting column currents I1, I2, . . . , In represent the result of a vector-matrix multiplication operation that is performed (to multiply the given row vector of the matrix A with the approximate inverse matrix P), wherein the input analog voltage vector [V1, V2, . . . , Vn] is multiplied by the conductance matrix G (of conductance values Gij) to generate and output an analog current vector [I1, I2, . . . , In], as illustrated in FIG. 5A. In particular, a given column current Ij is computed as $I_j = \sum_{i=1}^{n} V_i G_{ij}$. For example, the column current I1 for the first column C1 is determined as $I_1 = V_1 G_{11} + V_2 G_{21} + \cdots + V_n G_{n1}$.
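In software, the Ohm's-law/Kirchhoff's-law computation of this paragraph reduces to a single matrix-vector product over the conductance matrix. The sketch below emulates the column currents, with an optional additive read-noise term; the noise model and its magnitude are illustrative assumptions, not characterized hardware behavior.

```python
import numpy as np

def analog_vmm(V, G, noise_std=0.0, rng=None):
    """Emulate the analog vector-matrix multiply: each cell sources
    I = V_i * G_ij (Ohm's law) and the per-column read currents sum
    (Kirchhoff's law), giving I_j = sum_i V_i * G_ij."""
    rng = np.random.default_rng() if rng is None else rng
    I_out = V @ G                                    # ideal column currents
    if noise_std > 0.0:                              # optional analog read noise
        I_out = I_out + rng.normal(0.0, noise_std, I_out.shape)
    return I_out

# Example: with V = np.array([1.0, 0.5, 0.25]) and a 3x3 conductance matrix G,
# analog_vmm(V, G) returns the three column currents [I1, I2, I3].
```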
  • The resulting aggregate read currents I1, I2, . . . , In at the output of the respective columns C1, C2, . . . , Cn are input to respective readout circuit blocks 530-1, 530-2, . . . , 530-n of the readout circuitry 530. The aggregate read currents I1, I2, . . . , In are integrated by the respective current integrator circuits 532-1, 532-2, . . . , 532-n to generate respective output voltages, which are quantized by the respective ADC circuits 534-1, 534-2, . . . , 534-n to generate respective digital output signals y1, y2, . . . , yn of an output vector y. The digital output signals y1, y2, . . . , yn are transmitted to the digital processing system and processed to determine the errors. As noted above, in some embodiments, the error is determined as a difference between the output values y1, y2, . . . , yn and target values for a corresponding row of the identity matrix I; a sketch of this digital error computation is shown below. The digital processing system generates a digital error vector δ which is then applied to the computing system 500 to perform an update process, as schematically illustrated in FIG. 5B.
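  • As a hedged sketch of this digital-domain step (the function name is hypothetical), the error vector for the i-th row of the matrix A is the readout vector minus row i of the identity matrix:

```python
import numpy as np

def error_vector(y: np.ndarray, row_index: int) -> np.ndarray:
    """delta = y - e_i, where e_i is row i of the identity matrix."""
    target = np.zeros_like(y)
    target[row_index] = 1.0
    return y - target

y = np.array([0.9, 0.2, -0.1])   # example readout vector
print(error_vector(y, 0))        # -> [-0.1  0.2 -0.1]
```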
  • FIG. 5B schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform an inverse matrix update process for a matrix inversion process, according to an exemplary embodiment of the disclosure. More specifically, FIG. 5B schematically illustrates a method for configuring the computing system 500 to perform a vector-vector outer product computation (e.g., block 305 (FIG. 3), FIG. 4B) to incrementally update matrix values of the approximate inverse matrix P stored in the RPU array 505. FIG. 5B schematically illustrates a configuration of the computing system 500 in which multiplexers in the peripheral circuitry of the computing system 500 are activated to selectively connect column line driver circuitry 540 to the column lines C1, C2, . . . , Cn. The column line driver circuitry 540 comprises a plurality of DAC circuit blocks 542-1, 542-2, . . . , 542-n (collectively DAC circuit blocks 542) which are connected to respective column lines C1, C2, . . . , Cn. As further shown in FIG. 5B, for the update operation, the DAC circuit blocks 522-1, 522-2, . . . , 522-n are connected to the respective row lines R1, R2, . . . , Rn. The DAC circuit blocks 542 perform the same function as the DAC circuit blocks 522 as described above.
  • As shown in FIG. 5B, for the update operation (vector-vector outer product operation) performed on the array of RPU cells 510 of the computing system 500, the vector of error signals δ=[δ1, δ2, . . . , δn] provided by the digital processing system is input to the DAC circuitry 542, wherein the digital error signals δ1, δ2, . . . , δn are input to the respective DAC circuit blocks 542-1, 542-2, . . . , 542-n which are connected to the respective columns C1, C2, . . . , Cn. The DAC circuit blocks 542-1, 542-2, . . . , 542-n generate analog voltages V1, V2, . . . , Vn using the same or similar time-encoding techniques as discussed above to generate pulse-width modulated voltage pulses (with a same amplitude but tunable pulse width) which correspond to the values of the respective digital error signals δ1, δ2, . . . , δn.
  • Moreover, for the update operation, the 1×n row vector of the matrix A that was applied to the rows for the vector-matrix multiplication is input again, but scaled by the learning rate parameter η. In particular, as shown in FIG. 5B, digital signals ηx1, ηx2, . . . , ηxn are input to respective DAC circuit blocks 522-1, 522-2, . . . , 522-n. The update process is performed on the RPU array 505 by performing a vector-vector outer product operation which involves multiplying the input vector ηx=[ηx1, ηx2, . . . , ηxn] with the error vector δ=[δ1, δ2, . . . , δn] to update the conductance values of the RPU cells 510 and, thereby, incrementally update the matrix values of the approximate inverse matrix P stored in the RPU array 505. Various methods can be used to generate the analog voltage pulses (e.g., stochastic pulses) and implement the vector-vector multiplication in the analog domain, the details of which are not necessary for one of ordinary skill in the art to understand the exemplary embodiments for computing inverse matrices, as discussed herein. An end-to-end software sketch of the iterative forward-and-update loop is provided below.
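  • The sketch below emulates, in floating point, the overall stochastic gradient descent matrix inversion described above: each iteration multiplies one row of A with the stored estimate P, forms the error against the matching row of the identity matrix, and applies a rank-one outer-product update scaled by η. The learning rate, epoch count, and test matrix are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def invert_via_sgd(A: np.ndarray, eta: float = 0.05, epochs: int = 200,
                   seed: int = 0) -> np.ndarray:
    """Converge a random initial estimate P toward the inverse of A."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n, n))   # random initial estimate
    I = np.eye(n)
    for _ in range(epochs):                 # one epoch = all rows of A
        for i in range(n):
            x = A[i, :]                     # row vector of A (forward input)
            y = x @ P                       # vector-matrix multiply on array
            delta = y - I[i, :]             # error vs. row i of the identity
            P -= np.outer(eta * x, delta)   # outer-product (rank-one) update
    return P

A = np.array([[4.0, 1.0], [1.0, 3.0]])      # symmetric positive definite
P = invert_via_sgd(A)
print(np.allclose(A @ P, np.eye(2), atol=1e-6))  # True
```

  • Note that convergence of this digital emulation requires the learning rate η to be small relative to the squared norms of the rows of A, which is a standard step-size condition for stochastic gradient descent.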
  • While FIG. 5A schematically illustrates an exemplary method for generating the aggregate column currents for a vector-matrix multiplication operation performed in the analog domain, other techniques can be implemented to generate the aggregate column currents using differential current techniques that allow for “signed matrix values.” For example, FIG. 6 schematically illustrates a method for configuring a computing system comprising an array of resistive processing unit cells to perform a vector-matrix operation for a matrix inversion process using signed matrix values, according to exemplary embodiments of the disclosure. In particular, FIG. 6 schematically illustrates a method for generating an aggregate column current ICOL1 using differential column currents I1+ and I1− from corresponding columns C1+ and C1− of two separate RPU arrays 510 and 610, wherein the conductance is determined as (G+−G−). FIG. 6 schematically illustrates a differential read scheme in which an aggregate column current ICOL1 that is input to the readout circuit block 530-1 is determined as ICOL1=I1+−I1−. With this differential scheme, the magnitude of ICOL1 will correspond to a given matrix value, wherein the sign of the matrix value will depend on whether I1+ is greater than, equal to, or less than I1−. A positive sign (ICOL1>0) will be obtained when I1+>I1−. A zero value (ICOL1=0) will be obtained when I1+=I1−. A negative sign (ICOL1<0) will be obtained when I1+<I1−.
  • More specifically, in the exemplary embodiment of FIG. 6, each RPU cell 510 in the computing system 500 of FIG. 5A comprises two unit RPU cells 510-1 and 510-2 which have respective conductance values Gij+ and Gij−, wherein the conductance value of a given RPU cell 510 is determined as the difference between the respective conductance values, i.e., Gij=Gij+−Gij−, where i and j are indices within the RPU array 505. In this way, negative and positive weights can be readily encoded using positive-only conductance values. In other words, since the conductance values of the resistive devices of the RPU cells can only be positive, the differential scheme in FIG. 6 implements a pair of identical RPU device arrays to encode positive (Gij+) and negative (Gij−) matrix values, wherein the matrix value (Gij) of a given RPU cell is proportional to the difference of the two conductance values stored in two corresponding devices (Gij+−Gij−) located in identical positions of the pair of RPU arrays 510 and 610 (wherein the two RPU arrays 510 and 610 can be stacked on top of each other in a back-end-of-line metallization structure of a chip). In this instance, a single RPU tile is deemed to be a pair of RPU arrays together with the peripheral circuits that support the parallel operation of the arrays in all three cycles.
  • As shown in FIG. 6, positive voltage pulses (V1, V2, . . . , Vn) and corresponding negative voltage pulses (−V1, −V2, . . . , −Vn) are supplied separately to the RPU cells 510-1 and 510-2 in corresponding rows of the identical RPU arrays 510 and 610 that are used to encode positive and negative inverse matrix values. The aggregate column currents I1+ and I1− that are output from the corresponding first columns C1+ and C1− in the respective RPU arrays 510 and 610 are combined to generate a differential aggregate current ICOL1 which is input to the readout circuit block 530-1 connected to the corresponding first columns C1+ and C1−. A numerical sketch of this differential encoding is shown below.
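  • The following sketch (illustrative only; the function names are hypothetical) shows how a signed matrix can be split into two positive-only conductance arrays and how the differential column currents reproduce the signed vector-matrix product:

```python
import numpy as np

def encode_differential(G: np.ndarray):
    """Split a signed matrix into positive-only arrays: G = G_pos - G_neg."""
    G_pos = np.clip(G, 0.0, None)    # positive part
    G_neg = np.clip(-G, 0.0, None)   # magnitude of the negative part
    return G_pos, G_neg

def differential_column_currents(V, G_pos, G_neg):
    """I_col = I+ - I-, combining the two arrays' column currents."""
    return V @ G_pos - V @ G_neg

G = np.array([[0.5, -0.3], [-0.2, 0.8]])  # signed matrix values
G_pos, G_neg = encode_differential(G)
V = np.array([1.0, 0.5])
assert np.allclose(differential_column_currents(V, G_pos, G_neg), V @ G)
```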
  • In some embodiments where complex matrices are implemented (e.g., a complex matrix which comprises a real part and an imaginary part), the RPU framework of FIG. 6 can be implemented to store real and imaginary matrix values in two distinct RPU arrays. For example, in the exemplary embodiment of FIG. 6, the first RPU array 510 can be configured to store the real matrix values of a complex matrix, while the second RPU array 610 is configured to store the imaginary matrix values of the complex matrix. In this manner, the respective parts can then be processed separately, making it possible to obtain a conjugate transpose A* and a pseudoinverse A†. A simplified sketch of this real/imaginary decomposition is shown below.
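  • As a hedged illustration of why two real-valued arrays suffice, the sketch below decomposes a complex vector-matrix product into four real-valued products, each of which maps onto an analog array operation (the names are illustrative):

```python
import numpy as np

def complex_vecmat(x_re, x_im, P_re, P_im):
    """y = x P with P = P_re + i*P_im held in two real arrays:
    Re(y) = x_re P_re - x_im P_im;  Im(y) = x_re P_im + x_im P_re."""
    y_re = x_re @ P_re - x_im @ P_im
    y_im = x_re @ P_im + x_im @ P_re
    return y_re, y_im

rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
y_re, y_im = complex_vecmat(x.real, x.imag, P.real, P.imag)
assert np.allclose(y_re + 1j * y_im, x @ P)
```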
  • Exemplary embodiments of the present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
  • The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
  • Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
  • Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, configuration data for integrated circuitry, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, and procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
  • Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
  • These computer readable program instructions may be provided to a processor of a computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
  • The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the blocks may occur out of the order noted in the Figures. For example, two blocks shown in succession may, in fact, be accomplished as one step, executed concurrently, substantially concurrently, in a partially or wholly temporally overlapping manner, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
  • These concepts are illustrated with reference to FIG. 7, which schematically illustrates an exemplary architecture of a computing node which can host and execute a matrix inversion process according to an exemplary embodiment of the disclosure. FIG. 7 illustrates a computing node 700 which comprises a computer system/server 712, which is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with computer system/server 712 include, but are not limited to, personal computer systems, server computer systems, thin clients, thick clients, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, and distributed cloud computing environments that include any of the above systems or devices, and the like.
  • Computer system/server 712 may be described in the general context of computer system executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, and so on that perform particular tasks or implement particular abstract data types. Computer system/server 712 may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
  • In FIG. 7, computer system/server 712 in computing node 700 is shown in the form of a general-purpose computing device. The components of computer system/server 712 may include, but are not limited to, one or more processors or processing units 716, a system memory 728, and a bus 718 that couples various system components including system memory 728 to the processors 716.
  • The bus 718 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, and not limitation, such architectures include Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnects (PCI) bus.
  • The computer system/server 712 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computer system/server 712, and it includes both volatile and non-volatile media, removable and non-removable media.
  • The system memory 728 can include computer system readable media in the form of volatile memory, such as random access memory (RAM) 730 and/or cache memory 732. The computer system/server 712 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 734 can be provided for reading from and writing to non-removable, non-volatile magnetic media (not shown and typically called a “hard drive”). Although not shown, a magnetic disk drive for reading from and writing to a removable, non-volatile magnetic disk (e.g., a “floppy disk”), and an optical disk drive for reading from or writing to a removable, non-volatile optical disk such as a CD-ROM, DVD-ROM or other optical media can be provided. In such instances, each can be connected to bus 718 by one or more data media interfaces. As depicted and described herein, memory 728 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the invention.
  • The program/utility 740, having a set (at least one) of program modules 742, may be stored in memory 728 by way of example, and not limitation, as well as an operating system, one or more application programs, other program modules, and program data. Each of the operating system, one or more application programs, other program modules, and program data or some combination thereof, may include an implementation of a networking environment. Program modules 742 generally carry out the functions and/or methodologies of embodiments of the disclosure as described herein.
  • Computer system/server 712 may also communicate with one or more external devices 714 such as a keyboard, a pointing device, a display 724, etc., one or more devices that enable a user to interact with computer system/server 712, and/or any devices (e.g., network card, modem, etc.) that enable computer system/server 712 to communicate with one or more other computing devices. Such communication can occur via Input/Output (I/O) interfaces 722. Still yet, computer system/server 712 can communicate with one or more networks such as a local area network (LAN), a general wide area network (WAN), and/or a public network (e.g., the Internet) via network adapter 720. As depicted, network adapter 720 communicates with the other components of computer system/server 712 via bus 718. It should be understood that although not shown, other hardware and/or software components could be used in conjunction with computer system/server 712. Examples include, but are not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, SSD drives, and data archival storage systems, etc.
  • Additionally, it is to be understood that although this disclosure includes a detailed description on cloud computing, implementation of the teachings recited herein are not limited to a cloud computing environment. Rather, embodiments of the present invention are capable of being implemented in conjunction with any other type of computing environment now known or later developed.
  • Cloud computing is a model of service delivery for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, network bandwidth, servers, processing, memory, storage, applications, virtual machines, and services) that can be rapidly provisioned and released with minimal management effort or interaction with a provider of the service. This cloud model may include at least five characteristics, at least three service models, and at least four deployment models.
  • Characteristics are as follows:
  • On-demand self-service: a cloud consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with the service's provider.
  • Broad network access: capabilities are available over a network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
  • Resource pooling: the provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to demand. There is a sense of location independence in that the consumer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter).
  • Rapid elasticity: capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out and rapidly released to quickly scale in. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be purchased in any quantity at any time.
  • Measured service: cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
  • Service Models are as follows:
  • Software as a Service (SaaS): the capability provided to the consumer is to use the provider's applications running on a cloud infrastructure. The applications are accessible from various client devices through a thin client interface such as a web browser (e.g., web-based e-mail). The consumer does not manage or control the underlying cloud infrastructure including network, servers, operating systems, storage, or even individual application capabilities, with the possible exception of limited user-specific application configuration settings.
  • Platform as a Service (PaaS): the capability provided to the consumer is to deploy onto the cloud infrastructure consumer-created or acquired applications created using programming languages and tools supported by the provider. The consumer does not manage or control the underlying cloud infrastructure including networks, servers, operating systems, or storage, but has control over the deployed applications and possibly application hosting environment configurations.
  • Infrastructure as a Service (IaaS): the capability provided to the consumer is to provision processing, storage, networks, and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, deployed applications, and possibly limited control of select networking components (e.g., host firewalls).
  • Deployment Models are as follows:
  • Private cloud: the cloud infrastructure is operated solely for an organization. It may be managed by the organization or a third party and may exist on-premises or off-premises.
  • Community cloud: the cloud infrastructure is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations). It may be managed by the organizations or a third party and may exist on-premises or off-premises.
  • Public cloud: the cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
  • Hybrid cloud: the cloud infrastructure is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load-balancing between clouds).
  • A cloud computing environment is service oriented with a focus on statelessness, low coupling, modularity, and semantic interoperability. At the heart of cloud computing is an infrastructure that includes a network of interconnected nodes.
  • Referring now to FIG. 8, illustrative cloud computing environment 50 is depicted. As shown, cloud computing environment 50 includes one or more cloud computing nodes 10 with which local computing devices used by cloud consumers, such as, for example, personal digital assistant (PDA) or cellular telephone 54A, desktop computer 54B, laptop computer 54C, and/or automobile computer system 54N may communicate. Nodes 10 may communicate with one another. They may be grouped (not shown) physically or virtually, in one or more networks, such as Private, Community, Public, or Hybrid clouds as described hereinabove, or a combination thereof. This allows cloud computing environment 50 to offer infrastructure, platforms and/or software as services for which a cloud consumer does not need to maintain resources on a local computing device. It is understood that the types of computing devices 54A-N shown in FIG. 8 are intended to be illustrative only and that computing nodes 10 and cloud computing environment 50 can communicate with any type of computerized device over any type of network and/or network addressable connection (e.g., using a web browser).
  • Referring now to FIG. 9, a set of functional abstraction layers provided by cloud computing environment 50 (FIG. 8) is shown. It should be understood in advance that the components, layers, and functions shown in FIG. 9 are intended to be illustrative only and embodiments of the invention are not limited thereto. As depicted, the following layers and corresponding functions are provided:
  • Hardware and software layer 60 includes hardware and software components. Examples of hardware components include: mainframes 61; RISC (Reduced Instruction Set Computer) architecture based servers 62; servers 63; blade servers 64; storage devices 65; and networks and networking components 66. In some embodiments, software components include network application server software 67 and database software 68.
  • Virtualization layer 70 provides an abstraction layer from which the following examples of virtual entities may be provided: virtual servers 71; virtual storage 72; virtual networks 73, including virtual private networks; virtual applications and operating systems 74; and virtual clients 75.
  • In one example, management layer 80 may provide the functions described below. Resource provisioning 81 provides dynamic procurement of computing resources and other resources that are utilized to perform tasks within the cloud computing environment. Metering and Pricing 82 provide cost tracking as resources are utilized within the cloud computing environment, and billing or invoicing for consumption of these resources. In one example, these resources may include application software licenses. Security provides identity verification for cloud consumers and tasks, as well as protection for data and other resources. User portal 83 provides access to the cloud computing environment for consumers and system administrators. Service level management 84 provides cloud computing resource allocation and management such that required service levels are met. Service Level Agreement (SLA) planning and fulfillment 85 provide pre-arrangement for, and procurement of, cloud computing resources for which a future requirement is anticipated in accordance with an SLA.
  • Workloads layer 90 provides examples of functionality for which the cloud computing environment may be utilized. Examples of workloads and functions which may be provided from this layer include: mapping and navigation 91; software development and lifecycle management 92; virtual classroom education delivery 93; data analytics processing 94; transaction processing 95; and various functions 96 for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing based on the exemplary methods and functions discussed above in conjunction with, e.g., FIG. 3. Furthermore, in some embodiments, the hardware and software layer 60 would include the computing system 100 of FIG. 1 to implement or otherwise support the various workloads and functions 96 for performing a matrix inversion process using an analog resistive processing unit array for hardware accelerated computing.
  • The descriptions of the various embodiments of the present disclosure have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (25)

What is claimed is:
1. A method comprising:
receiving a request from an application to compute an inverse matrix of a given matrix;
performing a matrix inversion process in response to the received request, wherein the matrix inversion process comprises:
storing a first estimated inverse matrix of the given matrix in an array of resistive processing unit (RPU) cells;
performing a first iterative process on the first estimated inverse matrix stored in the array of RPU cells to converge the first estimated inverse matrix to a second estimated inverse matrix of the given matrix; and
reading the second estimated inverse matrix from the array of RPU cells upon completion of the first iterative process; and
returning an inverse matrix to the application, wherein the returned inverse matrix is based, at least in part, on the second estimated inverse matrix.
2. The method of claim 1, wherein the first iterative process comprises a stochastic gradient descent optimization process which comprises utilizing row vectors of the given matrix as training data to train the first estimated inverse matrix stored in the array of RPU cells and update matrix values of the first estimated inverse matrix stored in the array of RPU cells by utilizing error vectors that are determined based on matrix values of an identity matrix.
3. The method of claim 1, wherein performing the first iterative process comprises performing an iteration of the first iterative process, wherein performing the iteration comprises:
performing a vector matrix operation on the array of RPU cells, wherein performing the vector matrix operation comprises inputting a row vector, which represents a row of the given matrix, to the array of RPU cells, and multiplying the input row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector which is output from the array of RPU cells; and
performing an update operation to update matrix values of the estimated inverse matrix stored in the array of RPU cells, wherein performing the update operation comprises determining an error vector based on a difference between the output vector and a row vector of an identity matrix, inputting both the row vector of the given matrix and the determined error vector to the array of RPU cells, and performing an outer product operation of the row vector and the determined error vector on the array of RPU cells to update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
4. The method of claim 3, wherein performing the update operation further comprises:
scaling values of the row vector according to a learning rate parameter to generate a scaled row vector; and
performing the outer product operation of the scaled row vector and the determined error vector on the array of RPU cells to update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
5. The method of claim 3, further comprising repeating the iteration of the first iterative process for each row vector of the given matrix to complete an epoch of the first iterative process.
6. The method of claim 3, further comprising performing the first iterative process for a prespecified number of epochs, wherein the first iterative process is deemed complete following completion of the prespecified number of epochs.
7. The method of claim 1, wherein the given matrix comprises a symmetric positive definite matrix.
8. The method of claim 1, wherein storing the first estimated inverse matrix of the given matrix in the array of RPU cells comprises:
computing a random matrix; and
storing the random matrix in the array of RPU cells as an initial estimate of the inverse matrix of the given matrix.
9. The method of claim 1, further comprising performing a second iterative process on the second estimated inverse matrix read out from the array of RPU cells to thereby generate a third estimated inverse matrix that has an accuracy which is greater than an accuracy of the second estimated inverse matrix, wherein the second iterative process comprises an optimization process that is performed in a digital domain, and wherein the third estimated inverse matrix comprises the inverse matrix which is returned to the application.
10. The method of claim 9, wherein the second iterative process comprises a Newton optimization process.
11. A computer program product for performing a matrix inversion process, the computer program product comprising:
one or more computer readable storage media, and program instructions collectively stored on the one or more computer readable storage media, the program instructions comprising:
program instructions to receive a request from an application to compute an inverse matrix of a given matrix;
program instructions to perform a matrix inversion process in response to the received request, wherein the program instructions to perform the matrix inversion process comprise:
program instructions to store a first estimated inverse matrix of the given matrix in an array of resistive processing unit (RPU) cells;
program instructions to perform a first iterative process on the first estimated inverse matrix stored in the array of RPU cells to converge the first estimated inverse matrix to a second estimated inverse matrix of the given matrix; and
program instructions to read the second estimated inverse matrix from the array of RPU cells upon completion of the first iterative process; and
program instructions to return an inverse matrix to the application, wherein the returned inverse matrix is based, at least in part, on the second estimated inverse matrix.
12. The computer program product of claim 11, wherein the first iterative process comprises a stochastic gradient descent optimization process which comprises utilizing row vectors of the given matrix as training data to train the first estimated inverse matrix stored in the array of RPU cells and update matrix values of the first estimated inverse matrix stored in the array of RPU cells by utilizing error vectors that are determined based on matrix values of an identity matrix.
13. The computer program product of claim 11, wherein the program instructions to perform the first iterative process comprise program instructions to perform an iteration of the first iterative process, wherein the program instructions to perform the iteration comprise:
program instructions to perform a vector matrix operation on the array of RPU cells, wherein the program instructions to perform the vector matrix operation comprise program instructions to input a row vector, which represents a row of the given matrix, to the array of RPU cells, and multiply the input row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector which is output from the array of RPU cells; and
program instructions to perform an update operation to update matrix values of the estimated inverse matrix stored on the array of RPU cells, wherein the program instructions to perform the update operation comprise program instructions to determine an error vector based on a difference between the output vector and a row vector of an identity matrix, to input both the row vector of the given matrix and the determined error vector to the array of RPU cells, and to perform an outer product operation of the row vector and the determined error vector on the array of RPU cells to update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
14. The computer program product of claim 13, wherein the program instructions to perform the update operation further comprise:
program instructions to scale values of the row vector according to a learning rate parameter to generate a scaled row vector; and
program instructions to perform the outer product operation of the scaled row vector and the determined error vector on the array of RPU cells to update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
15. The computer program product of claim 13, further comprising program instructions to repeat the iteration of the first iterative process for each row vector of the given matrix to complete an epoch of the first iterative process, and perform the first iterative process for a prespecified number of epochs, wherein the first iterative process is deemed complete following completion of the prespecified number of epochs.
16. The computer program product of claim 11, further comprising program instructions to perform a second iterative process on the second estimated inverse matrix read out from the array of RPU cells to thereby generate a third estimated inverse matrix that has an accuracy which is greater than an accuracy of the second estimated inverse matrix, wherein the second iterative process comprises an optimization process that is performed in a digital domain, and wherein the third estimated inverse matrix comprises the inverse matrix which is returned to the application.
17. The computer program product of claim 16, wherein the second iterative process comprises a Newton optimization process.
18. A method comprising:
storing an estimated inverse matrix of a given matrix in an array of resistive processing unit (RPU) cells, wherein the RPU cells comprise conductance values that encode an array of matrix values of the estimated inverse matrix that is stored in the array of RPU cells;
performing a first operation on the array of RPU cells, wherein the first operation comprises:
applying a row vector, which represents a row of the given matrix, to inputs of first control lines that extend in a first direction across the array of RPU cells; and
performing a vector matrix multiplication operation on the array of RPU cells to generate an output vector on outputs of second control lines that extend in a second direction across the array of RPU cells, wherein the output vector represents a multiplication of the row vector and the array of matrix values of the estimated inverse matrix that is stored in the array of RPU cells; and
performing a second operation on the array of RPU cells, wherein the second operation comprises:
applying a scaled row vector to the inputs of the first control lines, wherein the scaled row vector comprises the row vector scaled by a learning rate parameter;
applying an error vector to inputs of the second control lines;
performing an outer product operation on the array of RPU cells to update at least some of the matrix values of the estimated inverse matrix that is stored in the array of RPU cells, wherein the outer product operation comprises performing an outer product of the scaled row vector and the error vector.
19. The method of claim 18, wherein the error vector represents a difference between the output vector generated by the first operation and a corresponding row vector of an identity matrix associated with the given matrix.
20. The method of claim 18, wherein the first and second operations are performed as part of an iterative stochastic gradient descent process which comprises utilizing the matrix values in row vectors of the given matrix as training data to train the estimated inverse matrix stored in the array of RPU cells and, thereby, converge the estimated inverse matrix to an increasingly accurate approximation of an actual inverse matrix of the given matrix.
21. A device, comprising:
an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, and second control lines extending in a second direction across the array of RPU cells, wherein each RPU cell is connected at an intersection of one of the first control lines and one of the second control lines, wherein each RPU cell comprises a resistive device with a tunable conductance, wherein conductance values of at least a portion of the resistive devices of the RPU cells in the array of RPU cells encode matrix values of an estimated inverse matrix that is stored in the array of RPU cells, wherein the estimated inverse matrix stored in the array of RPU cells represents an estimate of an inverse matrix of a given matrix; and
control circuitry operatively coupled to the array of RPU cells to cause performance of a first operation and a second operation on the array of RPU cells;
wherein the first operation comprises the control circuitry applying a row vector, which represents a row of the given matrix, to inputs of the first control lines to perform a vector matrix operation which comprises multiplying the input row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector on outputs of the second control lines; and
wherein the second operation comprises the control circuitry applying an error vector to inputs of the second control lines and inputting the row vector to the inputs of the first control lines, to perform an outer product operation of the error vector and the row vector on the array of RPU cells to thereby update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
22. The device of claim 21, wherein the error vector represents a difference between the output vector and a corresponding row vector of an identity matrix corresponding to the given matrix.
23. A computing system, comprising:
a digital processing system comprising one or more processors and memory to store program instructions that are executed by the one or more processors to perform a matrix inversion process to compute an inverse matrix of a given matrix; and
a neuromorphic computing system, coupled to the digital processing system, the neuromorphic computing system comprising at least one neural core, wherein the at least one neural core comprises an array of resistive processing unit (RPU) cells, first control lines extending in a first direction across the array of RPU cells, second control lines extending in a second direction across the array of RPU cells, and peripheral circuitry coupled to the first control lines and to the second control lines, wherein each RPU cell is connected at an intersection of one of the first control lines and one of the second control lines, wherein each RPU cell comprises a resistive device with a tunable conductance, wherein conductance values of at least a portion of the resistive devices of the RPU cells in the array of RPU cells encode matrix values of an estimated inverse matrix that is stored in the array of RPU cells, wherein the estimated inverse matrix stored in the array of RPU cells represents an estimate of the inverse matrix of the given matrix;
wherein the digital processing system controls the peripheral circuitry of the at least one neural core to perform first and second operations of the matrix inversion process on the array of RPU cells;
wherein the first operation comprises the digital processing system applying a row vector, which represents a row of the given matrix, to inputs of the first control lines to perform a vector matrix operation which comprises multiplying the row vector with the estimated inverse matrix stored in the array of RPU cells to generate an output vector on outputs of the second control lines; and
wherein the second operation comprises the digital processing system applying an error vector to inputs of the second control lines and inputting the row vector to the inputs of the first control lines, to perform an outer product operation of the error vector and the row vector on the array of RPU cells to thereby update at least some matrix values of the estimated inverse matrix that is stored in the array of RPU cells.
24. The computing system of claim 23, wherein the digital processing system performs an iterative process on the estimated inverse matrix stored in the array of RPU cells, wherein the iterative process repeats the first and second operations for each row of the given matrix, and wherein the iterative process comprises a stochastic gradient descent optimization process.
25. The computing system of claim 24, wherein the digital processing system reads out the estimated inverse matrix stored in the array of RPU cells and performs a second iterative process on the estimated inverse matrix read out from the array of RPU cells to increase an accuracy of the estimated inverse matrix of the given matrix, wherein the second iterative process comprises a Newton optimization process.
US17/134,814 2020-12-28 2020-12-28 Matrix inversion using analog resistive crossbar array hardware Pending US20220207376A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/134,814 US20220207376A1 (en) 2020-12-28 2020-12-28 Matrix inversion using analog resistive crossbar array hardware


Publications (1)

Publication Number Publication Date
US20220207376A1 true US20220207376A1 (en) 2022-06-30

Family

ID=82119276

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/134,814 Pending US20220207376A1 (en) 2020-12-28 2020-12-28 Matrix inversion using analog resistive crossbar array hardware

Country Status (1)

Country Link
US (1) US20220207376A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150213884A1 (en) * 2014-01-30 2015-07-30 University Of Dayton Partitioned resistive memory array

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150213884A1 (en) * 2014-01-30 2015-07-30 University Of Dayton Partitioned resistive memory array

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Han, R., Huang, P., Zhao, Y. et al. Efficient evaluation model including interconnect resistance effect for large scale RRAM crossbar array matrix computing. Sci. China Inf. Sci. 62, 22401 (2019). https://doi.org/10.1007/s11432-018-9555-8 (Year: 2018) *
Kun Wang, Li Li, Feng Han, Fan Feng, Jun Lin, Design and implementation of high performance matrix inversion based on reconfigurable processor, IEICE Electronics Express, 2016, Volume 13, Issue 15, Pages 20160579, https://doi.org/10.1587/elex.13.20160579 (Year: 2016) *
Sadeghi, Amir. (2015). A Stable Coupled Newton’s Iteration for the Matrix Inverse p th Root. Mathematical Modelling & Computations. 5. 69-79 (Year: 2015) *
Wei-Sheng Chin, Yong Zhuang, Yu-Chin Juan, and Chih-Jen Lin. 2015. A Fast Parallel Stochastic Gradient Method for Matrix Factorization in Shared Memory Systems. ACM Trans. Intell. Syst. Technol. 6, 1, Article 2 (April 2015), 24 pages. https://doi.org/10.1145/2668133 (Year: 2015) *


Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GOKMEN, TAYFUN;ONEN, OGUZHAN MURAT;WU, CHAI WAH;AND OTHERS;SIGNING DATES FROM 20201222 TO 20201227;REEL/FRAME:054753/0611

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER