CN114898792A - Multi-bit memory inner product and exclusive-or unit, exclusive-or vector and operation method - Google Patents


Info

Publication number
CN114898792A
Authority
CN
China
Prior art keywords
vector
bit
exclusive
unit
1fefet1r
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210390722.7A
Other languages
Chinese (zh)
Inventor
尹勋钊
刘哲恺
陈豪邦
卓成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202210390722.7A priority Critical patent/CN114898792A/en
Publication of CN114898792A publication Critical patent/CN114898792A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C16/00Erasable programmable read-only memories
    • G11C16/02Erasable programmable read-only memories electrically programmable
    • G11C16/04Erasable programmable read-only memories electrically programmable using variable threshold transistors, e.g. FAMOS
    • G11C16/0483Erasable programmable read-only memories electrically programmable using variable threshold transistors, e.g. FAMOS comprising cells having several storage transistors connected in series
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/57Arithmetic logic units [ALU], i.e. arrangements or devices for performing two or more of the operations covered by groups G06F7/483 – G06F7/556 or for performing logical operations
    • G06F7/575Basic arithmetic logic units, i.e. devices selectable to perform either addition, subtraction or one of several logical operations, using, at least partially, the same circuitry
    • GPHYSICS
    • G11INFORMATION STORAGE
    • G11CSTATIC STORES
    • G11C16/00Erasable programmable read-only memories
    • G11C16/02Erasable programmable read-only memories electrically programmable
    • G11C16/06Auxiliary circuits, e.g. for writing into memory
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Static Random-Access Memory (AREA)

Abstract

The multi-bit memory inner product and exclusive-OR unit, exclusive-OR vector, and operation method comprise N 1FeFET1R structures connected in parallel, an input transistor, a first inverter, and a second inverter, where N is a natural number greater than 1. Each 1FeFET1R structure comprises a FeFET and a resistor that are electrically connected; the resistor of each 1FeFET1R structure is electrically connected to the input transistor; the gate of the input transistor is electrically connected through the first inverter to the gate of the FeFET in one 1FeFET1R structure; and the gate of the FeFET in that 1FeFET1R structure is electrically connected through the second inverter to the gate of the FeFET in the other 1FeFET1R structure. The invention provides, for the first time, a unit and corresponding vector that are based on a nonvolatile memory device and simultaneously support multi-bit in-memory inner product and exclusive-OR, with better performance on three metrics: search energy, search delay, and area.

Description

Multi-bit memory inner product and exclusive-or unit, exclusive-or vector and operation method
Technical Field
The invention relates to the field of storage, calculation and circuits, in particular to a multi-bit memory inner product and exclusive OR unit, an exclusive OR vector and an operation method.
Background
In the context of artificial intelligence and data-intensive computing, binary neural networks (BNNs) and hyperdimensional computing (HDC) have been shown to apply efficiently to practical scenarios such as object tracking, voice recognition, and image clustering. Because the separation of computing units and storage units in the traditional von Neumann computer architecture causes high latency and energy consumption, replacing the traditional von Neumann architecture with a compute-in-memory architecture has become a research hotspot; compute-in-memory units composed of various novel nonvolatile devices can realize different logic operations. For example, the logical AND between binary vectors can be realized by a single ferroelectric transistor (FeFET).
However, in practical applications, binary vectors cannot satisfy data-intensive operation scenarios; multi-bit inner-product operation units apply far more widely to artificial-intelligence workloads such as convolutional neural networks. Multi-bit in-memory inner-product units based on conventional SRAM have been widely proposed in recent years, but they still have many shortcomings in delay, energy consumption, area, scalability, and the like, and no multi-bit in-memory inner-product and exclusive-OR architecture based on a novel nonvolatile memory device has been proposed. Meanwhile, beyond the multi-bit inner product, practical scenarios such as binary convolutional neural networks still require an exclusive-OR function: the Hamming distance, for example, is a bitwise exclusive-OR operation. The invention therefore provides a compute-in-memory unit suitable for both the multi-bit vector inner product and the exclusive-OR function.
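To make the two target operations concrete, a few lines of Python sketch the host-side reference behavior the unit is meant to accelerate: the Hamming distance is a bitwise XOR followed by a population count, and the multi-bit inner product is an element-wise multiply-accumulate. This is a behavioral reference only, not the in-memory circuit.

```python
def hamming_distance(a: int, b: int) -> int:
    # XOR marks the differing bit positions; counting the 1s
    # gives the Hamming distance between the two bit vectors.
    return bin(a ^ b).count("1")

def inner_product(q, w):
    # Plain multi-bit vector inner product, the other operation
    # the proposed compute-in-memory unit targets.
    return sum(qi * wi for qi, wi in zip(q, w))

assert hamming_distance(0b1011, 0b0011) == 1
assert inner_product([1, 2, 3], [4, 5, 6]) == 32
```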
Disclosure of Invention
The invention aims to provide a multi-bit memory inner product and exclusive-OR unit and a technical scheme for the corresponding exclusive-OR vector. It presents the first FeFET-based implementation and improves on metrics such as energy consumption, search delay, and area compared with existing work.
In order to achieve the purpose, the invention provides the following scheme:
A multi-bit memory inner product and exclusive-OR unit comprises N 1FeFET1R structures connected in parallel, an input transistor, a first inverter, and a second inverter, where N is a natural number greater than 1. Each 1FeFET1R structure comprises a FeFET and a resistor that are electrically connected; the resistor of each 1FeFET1R structure is electrically connected to the input transistor; the gate of the input transistor is electrically connected through the first inverter to the gate of the FeFET in one 1FeFET1R structure; and the gate of the FeFET in that 1FeFET1R structure is electrically connected through the second inverter to the gate of the FeFET in the other 1FeFET1R structure.
Further, the resistance of the resistor in each 1FeFET1R structure is different, so that the branch output currents form a binary-weighted series 2^(N-1), 2^(N-2), …, 2^1, 2^0, making the unit a multi-bit memory unit.
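The binary-weighted branches can be sketched with a simple behavioral model (an idealized assumption of mine, ignoring device non-idealities): each 1FeFET1R branch contributes a current proportional to its binary weight when its FeFET stores '1', so the summed current on the shared line encodes the stored N-bit value.

```python
def cell_current(stored_bits, i_unit=1.0):
    # stored_bits[0] is the MSB. The branch holding bit position k
    # (counted from the LSB) contributes i_unit * 2**k when its
    # FeFET stores '1'; the parallel branches sum on the shared
    # line, so the total current encodes the stored value.
    n = len(stored_bits)
    return sum(b * i_unit * 2 ** (n - 1 - k) for k, b in enumerate(stored_bits))

assert cell_current([1, 0, 1, 1]) == 11.0   # stored 0b1011 -> 11 current units
assert cell_current([1, 1, 1, 1]) == 15.0
```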
Further, the 1FeFET1R structure has a resistor electrically connected to the drain or source of the FeFET.
Further, the input transistor operates in a linear region, which maps the weight of the vector elements to a voltage and inputs to the gate of the corresponding FeFET.
Further, the first inverter is used to input complementary values of vector elements.
Further, the second inverter is used for storing complementary values of two corresponding FeFETs.
The invention also provides a multi-bit memory inner product and exclusive-OR vector, which comprises M of the multi-bit memory inner product and exclusive-OR units described above, the M units being connected in parallel.
The present invention further provides an operation method of the multi-bit memory inner product and exclusive-OR vector, comprising:
S1, each vector element of the stored vector is stored into the multi-bit memory inner product and exclusive-OR unit. The specific storage method is as follows: each vector element of the stored vector is binary; according to the binary value of the vector element to be input, if it is '1', a high voltage is input to the corresponding FeFET gate so that the FeFET stores '1'; if it is '0', a low voltage is input to the gate of the corresponding FeFET so that the FeFET stores '0'. At the same time, the complementary value w̄ is stored into the other exclusive-OR 1FeFET1R structure through the inverter.
S2, after the vector elements of the stored vector are stored into the multi-bit memory inner product and exclusive-OR unit, when a vector is queried, the following operations are performed:
S2.1, the vector elements of the query vector are applied, as voltages, to the gate of the input transistor in the multi-bit memory inner product and exclusive-OR unit; at the same time, the vector elements of the query vector reach the corresponding 1FeFET1R structure through the first inverter;
S2.2, to realize the multi-bit inner product function, a high voltage is simultaneously input to the gate of each FeFET, exploiting the FeFET's ability to realize AND: when the stored value is '0', the output is '0'; when the stored value is '1', the output is '1';
S2.3, to realize the multi-bit inner product function, the two inverters are in the off state; to realize the exclusive-OR function, the two inverters are connected to the power supply and are in the working state, and a low voltage is simultaneously input to the gates of the first N-1 FeFETs, i.e. '0' is stored.
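The two operating modes above can be summarized in a behavioral model (my idealized reading of the circuit, not a device-level simulation): in inner-product mode the unit outputs the stored value scaled by the query element; in exclusive-OR mode only the least-significant branch and its complementary branch remain active, yielding a bitwise XOR.

```python
def unit_output(stored_bits, query, mode="inner_product"):
    # Inner-product mode: every FeFET gate is driven high and the
    # input transistor scales all branches, so output = stored value * query.
    # XOR mode: the first N-1 branches are written '0'; only the LSB
    # branch and its complementary branch conduct, producing lsb XOR query.
    n = len(stored_bits)
    if mode == "inner_product":
        value = sum(b << (n - 1 - k) for k, b in enumerate(stored_bits))
        return value * query
    if mode == "xor":
        return stored_bits[-1] ^ query
    raise ValueError(mode)

assert unit_output([1, 0, 1, 1], 2) == 22            # stored 11, query 2
assert unit_output([1, 0, 1, 1], 1, mode="xor") == 0  # 1 XOR 1
assert unit_output([1, 0, 1, 0], 1, mode="xor") == 1  # 0 XOR 1
```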
The invention has the following beneficial effects:
The invention provides, for the first time, a unit and corresponding vector that are based on a nonvolatile memory device and simultaneously support multi-bit in-memory inner product and exclusive-OR, with better performance on three metrics: search energy, search delay, and area.
Drawings
FIG. 1 is a schematic diagram of an N = 4-bit multi-bit inner product and XOR unit applied to a cosine search architecture;
FIG. 2 is a circuit diagram of a single N = 4-bit multi-bit memory inner product and XOR unit;
FIG. 3(a) is a diagram illustrating the results of a single multi-bit in-memory XOR unit for stored values 0000 to 1111 when N = 4 bits;
FIG. 3(b) is a schematic diagram of the results of a single multi-bit in-memory inner product and XOR unit for stored values 0000 to 1111 over 100 Monte Carlo runs when N = 4 bits;
FIGS. 4(a) and 4(b) are schematic diagrams of the extension to N = 4 and N = 6 respectively, analyzing the scalability of the multi-bit inner product and XOR unit; FIG. 4(b) shows that even at N = 6, only one operation result cannot be distinguished in the worst case;
FIG. 5 is a diagram illustrating the results of reducing the resistance values in a single multi-bit in-memory inner product and XOR unit when N = 4 bits;
FIG. 6 is a schematic diagram of the application of the multi-bit memory inner product and XOR unit shown in FIG. 1.
Detailed Description
The invention is described in further detail below with reference to the figures and specific examples.
Referring to FIGS. 1-6, a multi-bit memory inner product and exclusive-OR unit includes N 1FeFET1R structures 1 connected in parallel, an input transistor 2, a first inverter 3, and a second inverter 4, where N is a natural number greater than 1. Each 1FeFET1R structure 1 includes a FeFET 100 and a resistor 101; the resistor 101 is electrically connected to the drain or source of the FeFET 100, and the resistor 101 of each 1FeFET1R structure 1 is electrically connected to the input transistor 2. The gate of the input transistor 2 is electrically connected through the first inverter 3 to the gate of the FeFET 100 in one 1FeFET1R structure 1, and the gate of the FeFET 100 in that 1FeFET1R structure 1 is electrically connected through the second inverter 4 to the gate of the FeFET 100 in the other 1FeFET1R structure 1.
To form an (N+1)-bit inner product unit operating in multi-bit in-memory inner product mode, only one 1FeFET1R structure 1 needs to be added to the N-bit structure, and the resistor 101 of the added 1FeFET1R structure 1 must yield 2^N times or 2^(-1) times the saturated drain-source current. Accordingly, the resistance of the resistor 101 in each 1FeFET1R structure 1 is different, so that the branch output currents form a binary-weighted series 2^(N-1), 2^(N-2), …, 2^1, 2^0, making the unit a multi-bit memory unit.
Wherein the input transistor 2 operates in a linear region, which maps the weight of the vector elements to a voltage and inputs it to the gate of the corresponding FeFET 100.
Wherein the first inverter 3 is used to input the complementary value of the vector element.
Wherein the second inverter 4 stores complementary values into the two corresponding FeFETs 100.
Referring to FIG. 2, the present invention further provides a multi-bit memory inner product and exclusive-OR vector, which comprises M multi-bit memory inner product and exclusive-OR units C as described above, the M units being connected in parallel to form a vector having M vector elements.
An operation method of the multi-bit memory inner product and exclusive-OR vector as described above comprises:
s1, each vector element of the stored vector is stored into the multi-bit memory inner product and exclusive OR unit, and the specific storage method is as follows: each vector element stored in the vector is binary, for example, an in-memory product unit of N-4 bits, W-W 3 w 2 w 1 w 0 High position is w 3 Is shown by 2 3 (ii) a Low position is w 0 Is shown by 2 0 . Inputting high voltage to the corresponding FeFET gate according to the binary value of the vector element to be input, and if the binary value is '1', storing the FeFET into '1'; if '0', a low voltage is input to the gate of the corresponding FeFET, so that the FeFET is stored in '0'. At the same time, another exclusive-or 1FeFET1R structure is stored in through an inverter
Figure BDA0003596826120000031
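The storage step amounts to a binary decomposition of each weight plus one complemented bit for the XOR branch. A small sketch (the helper name `decompose` is mine, and taking the complement of the least-significant bit is my reading of the circuit, since only the rightmost branches operate in XOR mode):

```python
def decompose(w: int, n: int = 4):
    # Bits w_{n-1}..w_0, MSB first, plus the complement of the LSB
    # that the second inverter writes into the extra XOR branch.
    bits = [(w >> k) & 1 for k in range(n - 1, -1, -1)]
    return bits, 1 - bits[-1]

bits, w0_bar = decompose(0b1011)
assert bits == [1, 0, 1, 1]  # w3 w2 w1 w0
assert w0_bar == 0           # complement of w0 = 1
```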
S2 storing the vector elements of the vector into the multi-bit memory inner product and xor unit, when the vector is queried, the following operations are performed:
s2.1, applying the vector elements of the query vector to the grid of an input transistor in the multi-bit memory inner product and exclusive OR unit in a voltage mode; meanwhile, the vector elements of the query vector correspond to the 1FeFET1R structure through the first inverter.
S2.2, for realizing the multi-bit inner product function, the grid of each FeFET simultaneously inputs high voltage, the characteristic that the FeFETs can realize AND is utilized, and when the stored value is '0', the output is '0'; when the stored value is '1', the output is '1'.
S2.3, to realize the multi-bit inner product function, the two inverters are in the off state; to realize the exclusive-OR function, the two inverters are connected to the power supply and are in the working state, and a low voltage is simultaneously input to the gates of the first N-1 FeFETs (i.e. V3 to V1 in FIG. 2), i.e. '0' is stored, so that only the rightmost two 1FeFET1R structures of the cell operate.
Unit application and architecture simulation operation flow description
The vector composed of multi-bit memory inner product and exclusive-OR units computes the input of the cosine calculation circuit. As shown in FIG. 1, taking N = 4 bits as an example, the transistors of each memory cell in the memory array are connected to form a vector containing M vector elements. The inner product result is copied by a current mirror as an input to the cosine calculation circuit, while the memory array on the right of FIG. 1 is used to calculate the L2 norm for each cosine value, i.e. the denominator of the cosine expression. The output of the cosine calculation circuit is processed by a Winner-Take-All circuit to find the stored vector with the largest cosine similarity to the query vector. The expression of the cosine calculation circuit is as follows:
cos(Q, W) = (Q · W) / (‖Q‖_2 · ‖W‖_2)
Q · W = Σ_i q_i w_i,  ‖W‖_2 = √(Σ_i w_i^2)
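A behavioral sketch of this search flow in Python (a reference model of the architecture's function, not the analog circuit): the numerator comes from the inner product, the denominator from the L2 norms, and the final maximum plays the role of the Winner-Take-All stage.

```python
import math

def cosine_nn(query, stored_vectors):
    # Returns the index of the stored vector with the largest
    # cosine similarity to the query, mirroring the simulated
    # architecture: in-memory inner product over norm product,
    # then a Winner-Take-All selection.
    def cos(q, w):
        num = sum(qi * wi for qi, wi in zip(q, w))
        den = math.sqrt(sum(qi * qi for qi in q)) * math.sqrt(sum(wi * wi for wi in w))
        return num / den
    return max(range(len(stored_vectors)), key=lambda i: cos(query, stored_vectors[i]))

vectors = [[1, 0, 0], [0, 1, 1], [2, 1, 0]]
assert cosine_nn([1, 0.1, 0], vectors) == 0
```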
the specific operation process of the multi-bit memory inner product and XOR unit is as follows
1. Before the search begins, a stored vector is input to each multi-bit memory inner product and exclusive-OR unit. Taking N = 4 bits as an example, V[3]~V[0] of each unit are written with w3, w2, w1, w0 respectively, while the other 1FeFET1R stores the complementary value w̄ through the inverter. A '0' is written with a -4V voltage pulse and a '1' with a +4V voltage pulse. After the vector elements are written, the search process can begin.
2.1. During the search, when the unit works in multi-bit in-memory inner product mode, each 1FeFET1R realizing the multi-bit inner product, i.e. V3~V0 (FIG. 2), is written with a +4V voltage pulse, i.e. '1'. At the same time, the input is applied through the gate of the input transistor (FIG. 2); a voltage between 0V and 1.2V is selected according to the magnitude of the input vector element value.
2.2. During the search, when the cell operates in XOR mode, the first N-1 1FeFET1R branches realizing the multi-bit inner product, i.e. V3~V1 (FIG. 2), are written with a -4V voltage pulse, i.e. '0'.
The function and effect of the invention are further illustrated and shown by the following simulation experiments:
1. simulation conditions
The experiments simulated a memory array consisting of 1FeFET1R memory cells using Spectre- and SPICE-compatible models based on physical circuits, where the FeFET is based on the Preisach model. The model enables efficient design and analysis and is widely used in FeFET circuit design. The PTM 45nm HP model was used for the remaining transistors.
The simulation architecture is shown in FIG. 1. FIG. 1 implements an artificial-intelligence application: nearest-neighbor search based on cosine search. The principle is to find the stored vector closest to the input vector in cosine distance. The memory cell (denoted by C) in FIG. 1 is the multi-bit memory inner product and exclusive-OR unit of the present invention; FIG. 2 shows the multi-bit inner product and exclusive-OR unit for N = 4 bits.
2. Simulation result
(1) According to the schematic diagram of the multi-bit memory inner product and XOR unit in FIG. 2, when the currents are at the nanoampere level, Spectre simulation shows the ratio R0 : R1 : R2 : R3 = 8 : 4 : 2 : 1.
(2) The abscissa of FIG. 3(a) is the voltage input to the gate of the input transistor in the multi-bit memory inner product and XOR unit; the voltage is a continuous value, and the curves from bottom to top correspond to the stored values 0001 (decimal 1), 0011 (3), 0101 (5), 0111 (7), 1001 (9), 1011 (11), 1101 (13), and 1111 (15). FIG. 3(b) shows the results obtained when considering FeFET process errors (extracted from non-patent document 1: T. Soliman et al., "Ultra-low power flexible precision FeFET based analog in-memory computing," IEEE IEDM, 2020), large resistance errors (extracted from non-patent document 2: D. Saito et al., "Analog in-memory computing in FeFET-based 1T1R array for edge AI applications," IEEE Symposium on VLSI Circuits, 2021), and transistor errors, i.e. a default 10% domain magnitude error and a 10% threshold-voltage error. The results show that the inner product of the invention has high accuracy: adjacent inner-product results differ by more than 2, and when the inner-product range is large enough (input greater than 0101, decimal 5, at a voltage of about 0.5V) no overlap occurs in the operation.
(3) Energy consumption and time delay are as follows:
comparing our results with those based on SRAM multibit inner-product-XOR cell proposed In non-patent document 3(M.Ali et al, "IMAC: In-Memory Multi-Bit Multi and ACcumulation In 6T SRAM Array", TCAS-I,2020.), the present invention obtains results of expanding vector number and vector dimension from FIG. 6, respectively, and the present invention obtained results of more than 10 4 The reduction of the power consumption of the inner product and exclusive-or unit in each multi-bit memory, and the reduction of 4.67 times of output delay.
(4) Consumption area:
the area consumption of the present invention is significantly reduced compared to the above-mentioned non-patent document 3 mainly because a new nonvolatile memory device FeFET is utilized and is simpler in design than the conventional SRAM. For a single multi-bit memory inner product and exclusive or unit, the area of the present invention is reduced by 488 times (SRAM 64.9 μm) compared to the above non-patent document 3 2 Cell, 0.133 μm according to the invention 2 /cell)。
(5) Scalability:
FIG. 4 extends the N = 4-bit multi-bit in-memory inner product and XOR unit to N = 6 bits, showing that in the worst case only one value is inaccurate; specifically, for the N = 6 cell, the simulation results indicate that 000111 (decimal 7) and 001000 (decimal 8) produce the same current.
FIG. 5 shows that reducing the resistance of the 1FeFET1R and increasing the on-state working current increases the current difference flowing out of each branch of the multi-bit memory inner product and XOR unit; this indicates that in applications where the current magnitude need not be limited, such as Hamming-distance calculations, the scalability of the invention is further increased.
The above-described embodiments are intended to illustrate rather than to limit the invention, and any modifications and variations of the present invention are within the spirit of the invention and the scope of the claims.

Claims (8)

1. A multi-bit memory inner product and exclusive-OR unit, characterized by comprising N 1FeFET1R structures connected in parallel, an input transistor, a first inverter, and a second inverter, wherein N is a natural number greater than 1; each 1FeFET1R structure comprises a FeFET and a resistor that are electrically connected; the resistor of each 1FeFET1R structure is electrically connected to the input transistor; the gate of the input transistor is electrically connected through the first inverter to the gate of the FeFET in one 1FeFET1R structure; and the gate of the FeFET in that 1FeFET1R structure is electrically connected through the second inverter to the gate of the FeFET in the other 1FeFET1R structure.
2. The multi-bit memory inner product and exclusive-OR unit of claim 1, wherein the resistors in the 1FeFET1R structures have different resistance values, so that the output currents form a binary-weighted series 2^(N-1), 2^(N-2), …, 2^1, 2^0, making the unit a multi-bit memory unit.
3. The cell of claim 1, wherein the 1FeFET1R has a resistor electrically connected to the drain or source of the FeFET.
4. The multiple-bit memory inner product and exclusive-or unit of claim 1, wherein the input transistor operates in a linear region, which maps the weight of the vector element to a voltage and inputs the voltage to the gate of the corresponding FeFET.
5. A multi-bit in-memory product and xor unit as claimed in claim 1, wherein the first inverter is configured to input a complementary value of a vector element.
6. The multiple-bit memory inner product and exclusive-or unit of claim 1, wherein the second inverter is for storing complementary values of two corresponding fefets.
7. A multi-bit memory inner product and exclusive-OR vector, comprising M multi-bit memory inner product and exclusive-OR units according to any one of claims 1 to 6, the M units being connected in parallel.
8. An operation method of the multi-bit memory inner product and exclusive-OR vector as claimed in claim 7, comprising:
S1, each vector element of the stored vector is stored into the multi-bit memory inner product and exclusive-OR unit. The specific storage method is as follows: each vector element of the stored vector is binary; according to the binary value of the vector element to be input, if it is '1', a high voltage is input to the corresponding FeFET gate so that the FeFET stores '1'; if it is '0', a low voltage is input to the gate of the corresponding FeFET so that the FeFET stores '0'. At the same time, the complementary value w̄ is stored into the other exclusive-OR 1FeFET1R structure through the inverter.
S2, after the vector elements of the stored vector are stored into the multi-bit memory inner product and exclusive-OR unit, when a vector is queried, the following operations are performed:
S2.1, the vector elements of the query vector are applied, as voltages, to the gate of the input transistor in the multi-bit memory inner product and exclusive-OR unit; at the same time, the vector elements of the query vector reach the corresponding 1FeFET1R structure through the first inverter;
S2.2, to realize the multi-bit inner product function, a high voltage is simultaneously input to the gate of each FeFET, exploiting the FeFET's ability to realize AND: when the stored value is '0', the output is '0'; when the stored value is '1', the output is '1';
S2.3, to realize the multi-bit inner product function, the two inverters are in the off state; to realize the exclusive-OR function, the two inverters are connected to the power supply and are in the working state, and a low voltage is simultaneously input to the gates of the first N-1 FeFETs, i.e. '0' is stored.
CN202210390722.7A 2022-04-14 2022-04-14 Multi-bit memory inner product and exclusive-or unit, exclusive-or vector and operation method Pending CN114898792A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210390722.7A CN114898792A (en) 2022-04-14 2022-04-14 Multi-bit memory inner product and exclusive-or unit, exclusive-or vector and operation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210390722.7A CN114898792A (en) 2022-04-14 2022-04-14 Multi-bit memory inner product and exclusive-or unit, exclusive-or vector and operation method

Publications (1)

Publication Number Publication Date
CN114898792A true CN114898792A (en) 2022-08-12

Family

ID=82718368

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210390722.7A Pending CN114898792A (en) 2022-04-14 2022-04-14 Multi-bit memory inner product and exclusive-or unit, exclusive-or vector and operation method

Country Status (1)

Country Link
CN (1) CN114898792A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116863936A (en) * 2023-09-04 2023-10-10 之江实验室 Voice recognition method based on FeFET (field effect transistor) memory integrated array
CN116863936B (en) * 2023-09-04 2023-12-19 之江实验室 Voice recognition method based on FeFET (field effect transistor) memory integrated array


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination