EP4235398A1 - Mixed-signal hyperdimensional processor - Google Patents

Mixed-signal hyperdimensional processor

Info

Publication number
EP4235398A1
EP4235398A1
Authority
EP
European Patent Office
Prior art keywords
processing unit
array
row
input
bit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP22382166.1A
Other languages
German (de)
English (en)
Inventor
Daniel García Lesta
Fernando Rafael PARDO SECO
Óscar Pereira Rial
Paula López Martínez
Diego Cabello Ferrer
Víctor Manuel BREA SÁNCHEZ
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Universidade de Santiago de Compostela
Original Assignee
Universidade de Santiago de Compostela
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Universidade de Santiago de Compostela filed Critical Universidade de Santiago de Compostela
Priority to EP22382166.1A (EP4235398A1)
Priority to PCT/EP2023/054867 (WO2023161484A1)
Publication of EP4235398A1
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F 7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F 7/48: Methods or arrangements for performing computations using exclusively denominational number representation, using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F 7/544: Methods or arrangements for performing computations using exclusively denominational number representation, using non-contact-making devices, for evaluating functions by calculation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N 3/065: Analogue means

Definitions

  • The invention relates to hardware systems for hyperdimensional computing (HDC) tasks; more precisely, it relates to a multiprocessor mixed-signal hardware architecture to perform HDC tasks.
  • HDC hyperdimensional computing
  • Hyperdimensional computing is a brain-inspired computing paradigm built on vectors with thousands of dimensions to represent data.
  • HDC has addressed classification problems, showing promising results compared to state-of-the-art deep learning algorithms.
  • HDC involves simple operations, which makes it suitable for low-power hardware implementations.
  • HDC is a brain-inspired computing paradigm in which low-dimensional data, such as image features, signal values, text data, etc., are mapped into a vector with e.g. thousands of components. In this way, the information contained in the low-dimensional data is spread along the components.
  • the mapping into a hyperdimensional vector (HD-vector or HDV) is made through a hyperdimensional encoder (HD encoder) that provides e.g. the holistic representation of the low-dimensional data, assigning orthogonal HDVs to dissimilar low-dimensional inputs.
  • AM associative memory
  • HDC comprises three basic primitives, i.e., binding, bundling and permutation. Binding two binary HDVs, V 1 and V 2 generates a dissimilar HDV, i.e. an HDV orthogonal to V 1 and V 2 .
  • Binary HDV binding is carried out with a bit-wise XOR Boolean gate between two HDC vectors V 1 and V 2 :
  • V1 = 0 1 0 1 0 0 ... 0 0
  • V2 = 1 1 0 1 1 0 ... 1 0
  • V1 ⊕ V2 = 1 0 0 0 1 0 ... 1 0
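For illustration, the binding operation above can be sketched in a few lines of Python (the `bind` helper name is an assumption of this sketch, not part of the disclosure):

```python
# Binding of two binary hypervectors by component-wise XOR.
# The result is quasi-orthogonal to both inputs, and binding is its
# own inverse: bind(bind(v1, v2), v2) recovers v1.
def bind(v1, v2):
    return [a ^ b for a, b in zip(v1, v2)]

v1 = [0, 1, 0, 1, 0, 0, 0, 0]
v2 = [1, 1, 0, 1, 1, 0, 1, 0]
bound = bind(v1, v2)  # [1, 0, 0, 0, 1, 0, 1, 0], matching the rows above
```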
  • The second basic operation is bundling of two or more HDVs, which is run during online training to generate a class vector representing the complete set of HDVs in the class; for binary HDVs, bundling is typically implemented as a component-wise majority rule.
  • the last operation is permutation, which rotates the components of the HDV by a given number of positions suitable for encoding several HDVs into a meaningful HDV for classification tasks.
  • The associative-memory HDV V_am with the minimum Hamming distance to the query HDV V_1 gives the class of V_1.
  • Binding and bundling include operations between the same components of the HDV, whereas permutation and Hamming distance calculation involve operations between different HDV components.
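The other two primitives, together with the Hamming distance used for classification, admit an equally small behavioural sketch (the helper names `bundle`, `permute` and `hamming` are hypothetical):

```python
def bundle(vectors):
    # Component-wise majority rule over a set of binary HDVs;
    # an odd number of vectors avoids ties.
    return [1 if sum(bits) > len(vectors) / 2 else 0 for bits in zip(*vectors)]

def permute(v, k):
    # Circular rotation of the HDV components by k positions.
    return v[-k:] + v[:-k]

def hamming(v1, v2):
    # Number of differing components between two HDVs.
    return sum(a != b for a, b in zip(v1, v2))
```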
  • HDC hardware systems developed until now perform the operations of majority rule and Hamming distance calculation by means of digital circuitry, leading to area- and power-consuming designs, as inter alia described in A. Rahimi, S. Datta, D. Kleyko, E. P. Frady, B. Olshausen, P. Kanerva, and J. M. Rabaey, "High-dimensional computing as a nanoscalable paradigm," IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 64, no. 9, pp. 2508-2521, 2017, and M. Schmuck, L. Benini, and A. Rahimi, "Hardware optimizations of dense binary hyperdimensional computing: Rematerialization of hypervectors, binarized bundling, and combinational associative memory," J. Emerg. Technol. Comput. Syst., vol. 15, Oct. 2019.
  • the present disclosure provides a mixed-signal architecture to implement an HDC hardware processor.
  • The disclosure tackles the requirements of low power and area consumption of an HDC processor for classification or inference tasks.
  • a first aspect of the disclosure relates to a device comprising m ⁇ n 1-bit processing units (PUs), where each of m and n is a natural number equal to or greater than 2.
  • Each of the processing units (PUs) includes a memory unit, preferably an associative memory, and each PU is locally connected to its four nearest neighboring PUs through respective communication buses.
  • the nearest neighboring PUs are usually the PUs directly adjacent to the respective PU, at north, east, west and south from the respective PU.
  • Each PU of the first row is locally connected to the PU of the same column in the last row (e.g. the PU at position 1 × 3 is connected to the PU at position m × 3; it is noted that here the first row is identified with index 1, but it could likewise be identified with index 0, and that the last row is identified with index m, but it could likewise be identified with index m - 1; the same applies to column indices); each PU of the last column is locally connected to the PU in the next row and in the first column (e.g. the PU at position 2 × n is connected to the PU at position 3 × 1); and the last PU is connected to the first PU (i.e. the PU at m × n is connected to the PU at 1 × 1).
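Under a 0-indexed reading of the array, the wrap-around rules above amount to the following neighbor arithmetic (the function names are hypothetical, introduced only for this sketch):

```python
# Neighbor computation for a PU at (row r, column c) of an m x n array,
# 0-indexed. North/south wrap within a column; "east" of the last column
# continues at the first column of the next row, and the last PU wraps
# to the first PU at (0, 0).
def east_neighbor(r, c, m, n):
    if c < n - 1:
        return (r, c + 1)
    if r < m - 1:
        return (r + 1, 0)   # last column -> first column of next row
    return (0, 0)           # last PU -> first PU

def north_neighbor(r, c, m, n):
    return ((r - 1) % m, c)  # first row wraps to last row, same column
```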
  • the device further comprises a plurality of multiplexers, particularly one multiplexer per column.
  • Each such multiplexer has its output port connected to a different PU of the first row, and has two input ports: one is connected to the PU of the last row sharing the column of the output PU, and the other to an input terminal of the device; hence, the multiplexers are 1-bit multiplexers.
  • The array of PUs is also referred to, hereinafter, as the hyperprocessor or HDC processor.
  • These data can then be moved into the next row in the same clock cycle as a new stream of n bits is inserted into the bottom row.
  • the hyperprocessor will be loaded with the desired hypervector after m clock cycles at most.
  • the same strategy can be used to read out a stored hypervector from the first row.
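The load procedure above can be modelled behaviourally: on each clock cycle every row takes the contents of the row below while a fresh n-bit slice enters at the bottom, so the full m × n hypervector is in place after m cycles. A sketch under that reading (the `load_hypervector` name is an assumption):

```python
def load_hypervector(bits, m, n):
    # bits: flat list of m*n hypervector components.
    # Each cycle, rows shift up by one and a new n-bit slice is
    # inserted into the bottom row; after m cycles the array is full.
    rows = [[0] * n for _ in range(m)]
    for cycle in range(m):
        rows = rows[1:] + [bits[cycle * n:(cycle + 1) * n]]
    return rows
```

Reading a stored hypervector out through the top row is the same process in reverse.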
  • Although the terms rows and columns are used, these shall not be understood in a limiting manner. In this sense, columns may be horizontal and rows vertical. In fact, by rotating the array or device 90° clockwise or counterclockwise, the m × n array can be seen as becoming an n × m array. It will also be noted that terms such as first and last rows and columns are likewise subject to how the array or device is being inspected; a 180° rotation or a mirroring of the array or device will result in the same arrangement but with the first row being the last row, the first column being the last column, and vice versa. An array or device under all these possible interpretations also falls within the scope of the present disclosure.
  • A memory unit, i.e. local memory, in each PU implements the associative memory needed in HDC algorithms for inference tasks, which makes in-PU operations faster than their off-PU memory counterparts. The memory of each PU reduces data flow, as results from the processing need only be stored in memory once and not twice (e.g. processor registers for a majority rule do not need to be stored again in a memory, since the memory of the PU is an input for the majority rule; and/or profile vectors in processor registers can be loaded in parallel when computing the Hamming distance with respect to a query vector). This reduces power consumption and increases speed, as each PU has its own memory.
  • the mixed-mode architecture simplifies the HDC operations of e.g. majority rule and Hamming distance calculation by means of analog circuitry, leading to less area than fully digital implementations.
  • The hyperprocessor of the device consists of m times n PUs distributed in an m × n array, where all of the PUs receive the same control signals, from e.g. a controlling circuitry or unit within the device, or from outside the device via one or more input terminals of the device, following Single Instruction Multiple Data (SIMD) processing.
  • SIMD Single Instruction Multiple Data
  • a possible application of the PU's 4-connectivity is the data input/output to the HDC processor through the plurality of multiplexers.
  • Every PU has local processing capabilities through a 1-bit logic unit (LU), which can be regarded as an arithmetic logic unit as it can perform 1-bit arithmetic operations, and it is connected to its four nearest neighbors as described above, following an x-y torus architecture.
  • LU 1-bit logic unit
  • This scheme allows data to be moved along both perpendicular axes, e.g. horizontal and vertical, which can be used to perform the different operations involved in HDC.
  • one of the possible operations is circular shifts in both directions, widely used in common hyperdimensional computing algorithms, thanks to the connection of the last PU in each row with the first one in the next row.
  • Connectivity in the y axis permits implementing random shifts faster than with communication in the x axis only.
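Because the last PU of each row feeds the first PU of the next row, and the last PU feeds the first, a simultaneous east-shift by all PUs realises a one-position circular shift of the whole hypervector in row-major order. A behavioural sketch of this (not the circuit itself; the function name is hypothetical):

```python
def circular_shift_east(rows):
    # Flatten the array in row-major order, rotate by one position,
    # and refold: equivalent to every PU pushing its bit to its east
    # neighbor, with the serpentine wrap between rows.
    m, n = len(rows), len(rows[0])
    flat = [b for row in rows for b in row]
    flat = [flat[-1]] + flat[:-1]
    return [flat[i * n:(i + 1) * n] for i in range(m)]
```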
  • Each PU's local memory stores the local components of the corresponding hypervectors.
  • The LU and the local memory allow any digital function, and therefore any HDC operation, to be run, provided the correct sequence of control signals is applied.
  • each PU includes all the circuitry required for common hyperdimensional computing operations.
  • Each PU comprises one or more analog circuits to perform majority rule and Hamming distance operations.
  • m times n is equal to or greater than 1000 (or 1024 if m and n are powers of two, which is preferable since that takes full advantage of the electronics of the device).
  • m times n is typically larger in HDC, as it can be in the order of thousands, tens of thousands or even larger, and the device of the present disclosure is configured to perform HDC with values of m times n of e.g. 4096 or greater, 8192, or even greater.
  • m times n should be equal to or greater than a dimensionality of an input hypervector. For instance, in some embodiments, m or n is 128 and the other one of m and n is 64.
  • The LU of each PU is implemented with a logic gate, for example a NOR gate or a NAND gate, which provides functional completeness.
  • the logic gate preferably has latched inputs and tristate output connected to the PU's own communication bus, or to any of the communication buses of its nearest four neighbors.
  • The latches' inputs collect data from the bus either as they are or as an inverted version thereof; in the latter case, each logic gate comprises an inverter and a multiplexer.
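A single NOR (or NAND) gate is functionally complete, which is why one latched gate per PU, sequenced over several cycles, can realise any Boolean function. As a truth-level illustration (pure Python, not the circuit; the helper names are this sketch's own):

```python
def nor(a, b):
    # 1-bit NOR on 0/1 integers.
    return 1 - (a | b)

# Every basic gate expressed through NOR alone.
def not_(a):    return nor(a, a)
def or_(a, b):  return nor(nor(a, b), nor(a, b))
def and_(a, b): return nor(not_(a), not_(b))
def xor(a, b):  return and_(or_(a, b), nor(and_(a, b), and_(a, b)))
```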
  • the local memory is a SRAM memory.
  • Communication between PUs is carried out through the tri-state buffers driven by the logic gate, e.g. a NOR or NAND gate. Each of them is connected to one neighbor's bus. Directions east and west are used for one-position circular shifts by enabling their corresponding buffers. When that occurs, the selected neighbor's bus is loaded with the logic gate's output, which can then be written into a processor latch or into the neighbor's local memory. All the information across the PUs is shared through a common 1-bit bus, connected to functional blocks such as the logic unit, the majority rule circuit and an output block in current mode.
  • the memory unit of each PU comprises a SRAM memory.
  • Each SRAM's column and row may receive a control signal from e.g. a controlling circuitry or unit within the device, or from outside the device via one or more input terminals of the device, in a one-hot format, in which case use of PU-level decoders (if any) can be avoided, otherwise control signals may require decoders to convert encoded words into a plurality of signals for control.
  • Since each PU is a 1-bit processor, only 1 bit can be written or read per clock cycle.
  • Bits, columns or rows of the memory may serve a double purpose.
  • The first one is to hold general information, like the other bits of the memory, and the second one is to buffer the input to the majority rule block when provided.
  • each processing unit comprises an analog circuit for majority rule computation and an analog circuit for Hamming distance computation.
  • an analog circuit for majority rule computation simplifies the needed circuitry compared to a completely digital implementation.
  • The majority rule block writes the stored values from the memory unit, e.g. the SRAM memory, into a plurality of capacitors through transistors.
  • The capacitors preferably have a linear response, or a response as linear as possible; in this sense the capacitors can be e.g. metal-insulator-metal (MIM) capacitors. With a non-linear response, the accuracy of e.g. the majority rule operations becomes lower and, thus, the result of the processing can become incorrect.
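Behaviourally, the capacitor bank averages the p stored bits as shared charge on a node, and a comparator decides the majority. An idealised numerical model of that analog computation (linear capacitors assumed; the `threshold` parameter stands in for the precharge bias that decides ties and is this sketch's own):

```python
def analog_majority(bits, vdd=1.8, threshold=None):
    # p equal capacitors charged to 0 V or vdd share their charge on a
    # common node; the node voltage is the average of the stored bits
    # scaled by vdd. The inverter/comparator outputs 1 when that
    # voltage exceeds its threshold (vdd/2 by default).
    v_a = sum(vdd if b else 0.0 for b in bits) / len(bits)
    if threshold is None:
        threshold = vdd / 2
    return 1 if v_a > threshold else 0
```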
  • One common operation in hyperdimensional computing is Hamming distance between two hypervectors to find the closest profile to a query hypervector. Accordingly, the analog circuitry to this end also enables the performance of this operation in a simple manner.
  • One possible way would be to execute the XOR operation between corresponding components of the two hypervectors at each PU and send the resultant hypervector to digital circuitry located within the device at the periphery of the hyperprocessor to count the number of ones.
  • the device is configured to receive one or more input hypervectors, process the hypervector(s) and output a result.
  • the device can be used for e.g. image and video classification, EGM signal processing, text recognition, etc.
  • the device is configured to receive and process one or more input hypervectors corresponding to a digital image or frame of a digital video, and output data indicative of one or more objects present in the digital image or frame.
  • A second aspect of the disclosure relates to a method for manufacturing a device for hyperdimensional computing, comprising: arranging a plurality of 1-bit processing units forming an m × n array, m and n each being a natural number equal to or greater than 2; arranging a plurality of n 1-bit multiplexers; connecting each processing unit to its four nearest neighboring processing units with the proviso that: one neighboring processing unit for each processing unit of the first row of the array is the processing unit of the last row sharing the same column of the array; one neighboring processing unit for each processing unit of the last column of the array is the processing unit of the first column and next row; and one neighboring processing unit for the processing unit at position m × n is the processing unit at position 1 × 1; connecting an output of each 1-bit multiplexer to a different processing unit of the first row of the array; and connecting a first input of each 1-bit multiplexer to the processing unit of the last row having the same column as the processing unit connected to the output, and a second input of each 1-bit multiplexer to one or more input terminals of the device.
  • the device manufactured is a device according to the first aspect of the disclosure.
  • a third aspect of the disclosure relates to a method of using a device according to the first aspect of the disclosure or a device manufactured with the method of the second aspect of the disclosure.
  • the method comprises: receiving, at the device, one or more hypervectors corresponding to a digital image or frame of a digital video; processing the one or more hypervectors at the device; and outputting, at the device, data indicative of one or more objects present in the digital image or frame.
  • the one or more hypervectors can be received through the one or more input terminals of the device.
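Functionally, the inference step described above is a nearest-class search in the associative memory: the query hypervector is compared with the stored class hypervectors and the closest one, in Hamming distance, gives the output label. A minimal sketch (the `classify` helper and the label dictionary are assumptions of this sketch):

```python
def classify(query, class_vectors):
    # Associative-memory lookup: return the label of the stored class
    # hypervector with minimum Hamming distance to the query.
    def hamming(v1, v2):
        return sum(a != b for a, b in zip(v1, v2))
    return min(class_vectors, key=lambda label: hamming(query, class_vectors[label]))
```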
  • Figure 1 shows a hyperprocessor of a device for hyperdimensional computing in accordance with embodiments.
  • the hyperprocessor includes a plurality of processing units 100 arranged in an m ⁇ n array or matrix. All processing units 100 are arranged on e.g. one or more integrated circuits. A position of each PU 100 within the array or matrix is shown inside the PU for illustrative purposes only.
  • Each PU 100 is connected to its four directly adjacent PUs 100 with data buses 104.
  • Adjacency can be regarded as the PUs at north, east, west and south of any given PU 100, where each of these coordinates is regarded in terms of the position of the PUs 100 in the array or matrix.
  • one of the four directly adjacent PUs 100 for the PUs 100 at the first row is the PU 100 of the same column but at the bottom row.
  • one of the four directly adjacent PUs 100 for the PUs 100 at the last column is the PU 100 in the first column but of the next row; in the case of the last PU 100, the next row is considered to be the first row.
  • The hyperprocessor also includes as many multiplexers 102 as there are columns in the array or matrix. Each multiplexer 102 is connected with the PUs 100 of both the first and last rows of the same column; the connection with the PU 100 of the first row is as an output 108 of the respective multiplexer 102, and the connection with the PU 100 of the last row is as an input 106 of the respective multiplexer 102. Additionally, each multiplexer 102 has an input 106 directly or indirectly connected with an input terminal of the device so that it can receive input hypervectors. With the plurality of multiplexers 102, the hyperprocessor can receive n bits per clock cycle, and output n bits per clock cycle.
  • the hyperprocessor represented in Figure 1 could likewise be rotated or mirrored, thereby altering the visual layout of the elements shown therein but still maintaining the same arrangement between elements.
  • the hyperprocessor could be rotated 90° like in Figure 2 .
  • Figure 2 shows a hyperprocessor of a device for hyperdimensional computing in accordance with embodiments.
  • The hyperprocessor includes a plurality of processing units 100 arranged in an m × n array or matrix, with m horizontal columns and n vertical rows.
  • the same hyperprocessor could be defined as having an n ⁇ m array or matrix, with n horizontal rows and m vertical columns. In that case, the number of multiplexers would be n and the connections between PUs 100 would be following the scheme illustrated in Figure 2 as well.
  • Figure 3 shows a processing unit, like one processing unit 100 of any one of Figures 1 and 2 , of a device in accordance with some embodiments.
  • the processing unit includes a memory unit 200, in this example a 16-bit SRAM memory although it will be readily apparent that other types and sizes of memories are possible without departing from the scope of the present disclosure.
  • The memory unit 200 is connected to both an analog circuit for majority rule computation 212 and its own data bus 216.
  • the data bus 216 is connected to a logic unit 204, and a controlled current source 220.
  • the logic unit 204 includes a logic gate 202 in the form of a NOR gate, but a different gate could be used instead, like e.g. a NAND gate.
  • the logic gate 202 includes latched inputs 208, 210 and a tristate output connected to the data bus 216, or through outputs 214 (where N, E, W and S denote north, east, west and south) to any of the buses of the four adjacent PUs 100 as described, for example, with reference to Figure 1 .
  • The inputs 208, 210 can collect data from the bus either as they are or as an inverted version thereof, owing to an inverter 206 and a multiplexer 218.
  • the Hamming distance operation is provided by adding a current source 220 with an e.g. NMOS transistor biased with a current mirror to obtain a current proportional to the number of ones after the XOR operation is executed.
  • This current source comprises transistors enabled by the control signal e_I and by the BUS voltage, and it has its output connected to a column bus that gathers the current from all PUs of a column. These column buses are also connected to one another, summing up all the currents from the array of PUs. Thus, after the XOR operation this current source is enabled, and only those PUs where the result was a one add their current.
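The current-mode readout can be modelled as each PU whose XOR result is 1 contributing one unit current to the tied-together column buses, so the summed current is proportional to the Hamming distance. A behavioural model (the unit current `i_unit` is an assumed parameter of this sketch):

```python
def hamming_current(query, profile, i_unit=1e-6):
    # XOR at each PU; every '1' result enables a unit current source
    # onto the shared buses, so the total current equals the Hamming
    # distance multiplied by i_unit.
    return sum(a ^ b for a, b in zip(query, profile)) * i_unit
```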
  • Figure 4 shows a circuitry, like circuitry 212 of Figure 3 , to implement the majority rule operation in processing units of a device in accordance with embodiments.
  • the majority rule can be computed on a set of e.g. p hypervectors.
  • The value of p, which is a natural number, is less than a size k of the memory unit, so p < k; for instance, p is 8 when the size of the memory unit is 16 bits.
  • the value of p is equal to a number of capacitors 300.
  • the majority rule block writes stored values from the memory unit into the capacitors 300 through transistors 302, like e.g. NMOS transistors, using signal write_MA.
  • A node V A 304 is charged to a precharge voltage v_precharge 306 to improve the circuit's yield by loading the parasitic input capacitance of a first inverter with a selected voltage; the first inverter is the pair of e.g. NMOS and PMOS transistors controlled by the e_MA and e_MAB signals.
  • This voltage can be used also for biasing the output value, which is useful to decide the tie case outcome (same number of ones and zeros).
  • the first tri-state inverter is activated, setting the output to the inverted desired output.
  • This inverter is used as a comparator; the inverter may cope with the effect of non-ideal switches implemented with e.g. NMOS transistors.
  • the switches may shift down logic ' 1' representation from a high voltage, e.g. 1.8 V, to a lower voltage.
  • the thresholds may have to be set up to account for these variations.
  • A second tri-state inverter is preferably added with a feedback loop to V A 304, which is turned on with the read_MA signal after the out signal has settled.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Logic Circuits (AREA)
  • Image Analysis (AREA)
EP22382166.1A 2022-02-25 2022-02-25 Processeur hyperdimensionnel à signaux mixtes Pending EP4235398A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22382166.1A EP4235398A1 (fr) 2022-02-25 2022-02-25 Processeur hyperdimensionnel à signaux mixtes
PCT/EP2023/054867 WO2023161484A1 (fr) 2022-02-25 2023-02-27 Processeur de signal mixte hyperdimensionnel

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP22382166.1A EP4235398A1 (fr) 2022-02-25 2022-02-25 Processeur hyperdimensionnel à signaux mixtes

Publications (1)

Publication Number Publication Date
EP4235398A1 (fr) 2023-08-30

Family

ID=80624109

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22382166.1A Pending EP4235398A1 (fr) 2022-02-25 2022-02-25 Processeur hyperdimensionnel à signaux mixtes

Country Status (2)

Country Link
EP (1) EP4235398A1 (fr)
WO (1) WO2023161484A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120326749A1 (en) * 2009-12-14 2012-12-27 Ecole Centrale De Lyon Interconnected array of logic cells reconfigurable with intersecting interconnection topology
US20150310311A1 (en) * 2012-12-04 2015-10-29 Institute Of Semiconductors, Chinese Academy Of Sciences Dynamically reconstructable multistage parallel single instruction multiple data array processing system

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A. Rahimi, S. Datta, D. Kleyko, E. P. Frady, B. Olshausen, P. Kanerva, J. M. Rabaey: "High-dimensional computing as a nanoscalable paradigm", IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 64, no. 9, 2017, pages 2508-2521, XP011659729, DOI: 10.1109/TCSI.2017.2705051
Eggimann, Manuel et al.: "A 5 µW Standard Cell Memory-Based Configurable Hyperdimensional Computing Accelerator for Always-on Smart Sensing", IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 68, no. 10, 3 August 2021, pages 4116-4128, XP011880028, ISSN: 1549-8328, DOI: 10.1109/TCSI.2021.3100266 *
M. Schmuck, L. Benini, A. Rahimi: "Hardware optimizations of dense binary hyperdimensional computing: Rematerialization of hypervectors, binarized bundling, and combinational associative memory", J. Emerg. Technol. Comput. Syst., vol. 15, October 2019
Mika Laiho, Jussi H. Poikonen, Pentti Kanerva, Eero Lehtonen: "High-Dimensional Computing with Sparse Vectors", IEEE Biomedical Circuits and Systems Conference, 2015
P. Kanerva: "Hyperdimensional computing: An introduction to computing in distributed representation with high-dimensional random vectors", Cognitive Computation, vol. 1, no. 2, 2009, pages 139-159

Also Published As

Publication number Publication date
WO2023161484A1 (fr) 2023-08-31

Similar Documents

Publication Publication Date Title
Marinella et al. Multiscale co-design analysis of energy, latency, area, and accuracy of a ReRAM analog neural training accelerator
Yakopcic et al. Extremely parallel memristor crossbar architecture for convolutional neural network implementation
US11398275B2 (en) Memory computation circuit and method
Umesh et al. A survey of spintronic architectures for processing-in-memory and neural networks
US11568223B2 (en) Neural network circuit
US20180121790A1 (en) Neural Array Having Multiple Layers Stacked Therein For Deep Belief Network And Method For Operating Neural Array
US20210192325A1 (en) Kernel transformation techniques to reduce power consumption of binary input, binary weight in-memory convolutional neural network inference engine
Sim et al. Scalable stochastic-computing accelerator for convolutional neural networks
Welser et al. Future computing hardware for AI
JPS62139066 (ja) Single-instruction multiple-data cell array processor having on-band RAM and an address generator
JP7475080B2 (ja) Fuzzy search circuit
He et al. Accelerating low bit-width deep convolution neural network in MRAM
Liu et al. Sme: Reram-based sparse-multiplication-engine to squeeze-out bit sparsity of neural network
Ahmed et al. Spindrop: Dropout-based bayesian binary neural networks with spintronic implementation
Cho et al. An on-chip learning neuromorphic autoencoder with current-mode transposable memory read and virtual lookup table
CN115461758A (zh) Memory device for training a neural network
Angizi et al. Pisa: A binary-weight processing-in-sensor accelerator for edge image processing
Wu et al. ReRAM crossbar-based analog computing architecture for naive bayesian engine
Su et al. CIM-spin: A scalable CMOS annealing processor with digital in-memory spin operators and register spins for combinatorial optimization problems
Rai et al. Perspectives on emerging computation-in-memory paradigms
EP4235398A1 (fr) Mixed-signal hyperdimensional processor
Herrmann et al. A dynamic associative processor for machine vision applications
US9978015B2 (en) Cortical processing with thermodynamic RAM
TWI771014B (zh) Memory circuit and operation method thereof
Luo et al. SpinCIM: Spin orbit torque memory for ternary neural networks based on the computing-in-memory architecture

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20240226

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR