CN112558922A - Four-transistor memory computing device based on separated word lines - Google Patents

Four-transistor memory computing device based on separated word lines

Info

Publication number
CN112558922A
CN112558922A (application CN202110190886.0A)
Authority
CN
China
Prior art keywords
transistor
word line
bit
bit line
gate
Prior art date
Legal status
Pending
Application number
CN202110190886.0A
Other languages
Chinese (zh)
Inventor
乔树山 (Qiao Shushan)
黄茂森 (Huang Maosen)
尚德龙 (Shang Delong)
周玉梅 (Zhou Yumei)
Current Assignee
Nanjing Institute of Intelligent Technology, Institute of Microelectronics, Chinese Academy of Sciences
Original Assignee
Nanjing Institute of Intelligent Technology, Institute of Microelectronics, Chinese Academy of Sciences
Priority date
Filing date
Publication date
Application filed by Nanjing Institute of Intelligent Technology, Institute of Microelectronics, Chinese Academy of Sciences
Priority to CN202110190886.0A
Publication of CN112558922A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G06F7/544 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices for evaluating functions by calculation
    • G06F7/5443 Sum of products
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means


Abstract

The invention provides a four-transistor in-memory computing device based on split word lines, comprising m × n in-memory compute bit cells arranged in an array. Each bit cell comprises transistors T1, T2, T3, and T4. The sources of T1 and T2 are both connected to the power supply; the drain of T1 is connected to the gate of T2, and the gate of T1 is connected to the drain of T2. The source of T3 is connected to bit line BL, the drain of T3 to the gate of T1, and the gate of T3 to word line WLL. The source of T4 is connected to bit line BLB, the drain of T4 to the gate of T2, and the gate of T4 to word line WLR. By splitting the word line into two lines, the invention keeps the computation logic simple, accelerates the computation process, and reduces circuit area.

Description

Four-transistor memory computing device based on separated word lines
Technical Field
The invention relates to the technical field of in-memory computing, and in particular to a four-transistor in-memory computing device based on split word lines.
Background
Deep convolutional neural networks (DCNNs) keep improving inference accuracy, and deep learning is moving toward edge computing. The most common operation in DCNNs is multiply-accumulate (MAC), which dominates both power and delay. MAC operations are highly regular and parallel, and are therefore well suited to hardware acceleration. However, the volume of memory accesses severely limits the energy efficiency of conventional digital accelerators, so in-memory computing (IMC) has become increasingly attractive for DCNN acceleration.
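The MAC operation mentioned above reduces to a dot product. As a minimal illustrative sketch (not from the patent; the function name is ours):

```python
# Minimal multiply-accumulate (MAC) kernel: the dominant DCNN operation
# that in-memory computing accelerates.
def mac(inputs, weights):
    """Return the sum of element-wise products (a dot product)."""
    assert len(inputs) == len(weights)
    acc = 0
    for x, w in zip(inputs, weights):
        acc += x * w  # one multiply-accumulate step
    return acc
```

A convolution layer reduces to many such dot products, which is why the regularity and parallelism of MAC map well onto hardware.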
Existing in-memory computing cells are generally built from six or more transistors, which leads to a large cell area, a slow computation process, and high power consumption.
Disclosure of Invention
The invention aims to provide a four-transistor in-memory computing device based on split word lines, in order to accelerate the computation process and reduce circuit area.
To achieve the above object, the present invention provides a four-transistor in-memory computing device based on split word lines, comprising:
the device comprises a column decoding driver, a row decoding driver, an in-memory computing array, n switches, and n analog-to-digital converters; the in-memory computing array comprises m × n in-memory compute bit cells arranged in an array;
the n first bit line terminals of the column decoding driver are respectively connected to the n bit lines BL; the n second bit line terminals of the column decoding driver are respectively connected to the n bit lines BLB;
the m first word line terminals of the row decoding driver are respectively connected to the m word lines WLL; the m second word line terminals of the row decoding driver are respectively connected to the m word lines WLR;
the first bit line terminals of the m in-memory compute bit cells in column j+1 are all connected to the jth bit line BL, the second bit line terminals of the m in-memory compute bit cells in column j+1 are all connected to the jth bit line BLB, the first word line terminals of the n in-memory compute bit cells in row i+1 are all connected to the ith word line WLL, and the second word line terminals of the n in-memory compute bit cells in row i+1 are all connected to the ith word line WLR; where i is an integer with 0 ≤ i < m, and j is an integer with 0 ≤ j < n;
the jth switch is connected to the jth bit line BLB, and the first input terminal of the jth analog-to-digital converter is connected to the jth switch and to the jth bit line BL, respectively;
the in-memory computation bit cell comprises: a transistor T1, a transistor T2, a transistor T3, and a transistor T4;
the source of the transistor T1 and the source of the transistor T2 are both connected to a power supply, the drain of the transistor T1 is connected to the gate of the transistor T2, and the gate of the transistor T1 is connected to the drain of the transistor T2;
the source of the transistor T3 is connected to the bit line BL, the drain of the transistor T3 is connected to the gate of the transistor T1, and the gate of the transistor T3 is connected to the word line WLL;
the source of the transistor T4 is connected to the bit line BLB, the drain of the transistor T4 is connected to the gate of the transistor T2, and the gate of the transistor T4 is connected to the word line WLR.
Optionally, the transistor T1 and the transistor T2 are both PMOS, and the transistor T3 and the transistor T4 are both NMOS.
Optionally, when the input activation signal is +1, word line WLL is at VDD and word line WLR is at 0; when the input activation signal is -1, word line WLL is at 0 and word line WLR is at VDD; where VDD = 1V.
Optionally, when Q =0, the weight value is-1; when Q = VDD, the weight value is +1, where VDD =1V, and Q is a common point of the gate of the transistor T1, the drain of the transistor T2, and the drain of the transistor T3.
Optionally, when the input activation signal is +1 and the weight value is +1, bit line BL is charged; when the input activation signal is -1 and the weight value is +1, bit line BLB is discharged; when the input activation signal is +1 and the weight value is -1, bit line BL is discharged; when the input activation signal is -1 and the weight value is -1, bit line BLB is charged.
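The word-line encoding and bit-line behavior described above can be written as a small behavioral model. This is an illustrative sketch in Python; the function names are ours, and only the four cases stated in the text are modeled:

```python
VDD = 1.0  # supply voltage given in the patent (1V)

def encode_input(activation):
    """Map a binary input activation to (WLL, WLR) word-line voltages:
    +1 -> (VDD, 0), -1 -> (0, VDD), as stated above."""
    if activation == +1:
        return (VDD, 0.0)
    if activation == -1:
        return (0.0, VDD)
    raise ValueError("activation must be +1 or -1")

def bitline_action(activation, weight):
    """Return (bit line, action) for one multiply: the four cases above."""
    if activation == +1:
        return ("BL", "charge" if weight == +1 else "discharge")
    return ("BLB", "discharge" if weight == +1 else "charge")
```

Note that the direction of the action tracks the arithmetic product: a charge corresponds to a product of +1, a discharge to -1.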
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the memory computing bit unit of the invention adopts 4 transistors to carry out memory computing, further optimizes the array structure, reduces the array area, directly saves the computing circuit, realizes word line separation by using the structure of two word lines, further reduces the area of the memory computing device, and simultaneously realizes quick reasoning time and robustness.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. The drawings in the following description show only some embodiments of the present invention; other drawings can be obtained from them by those skilled in the art without inventive effort.
FIG. 1 is a diagram of an in-memory computing device according to the present invention;
FIG. 2 is a diagram of a memory compute bitcell structure of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a four-transistor memory computing device based on a separated word line, which is used for accelerating a computing process and reducing the structural area.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
As shown in FIG. 1, the present invention discloses a four-transistor in-memory computing device based on split word lines, comprising: a column decoding driver, a row decoding driver, an in-memory computing array, n switches, and n analog-to-digital converters; the in-memory computing array comprises m × n in-memory compute bit cells arranged in an array. The column decoding driver provides column decoding and read/write bit line control; the row decoding driver provides row decoding and word line driving.
The n first bit line terminals of the column decoding driver are respectively connected to the n bit lines BL; its n second bit line terminals are respectively connected to the n bit lines BLB. The m first word line terminals of the row decoding driver are respectively connected to the m word lines WLL; its m second word line terminals are respectively connected to the m word lines WLR. The first bit line terminals of the m bit cells in column j+1 are all connected to the jth bit line BL (i.e., BL[j]), and their second bit line terminals to the jth bit line BLB (i.e., BLB[j]); the first word line terminals of the n bit cells in row i+1 are all connected to the ith word line WLL (i.e., WLL[i]), and their second word line terminals to the ith word line WLR (i.e., WLR[i]); where i is an integer with 0 ≤ i < m, and j is an integer with 0 ≤ j < n. The jth switch (SW[j]) is connected to the jth bit line BLB; the first input terminal of the jth analog-to-digital converter (Q[j]) is connected to the jth switch and to the jth bit line BL, respectively; its second input terminal is connected to a given reference value Vref; and the accumulated voltage is finally output through its output terminal.
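The wiring rule above can be sketched as a simple mapping from a cell's array position to the nets it touches (an illustrative sketch; the function and key names are ours):

```python
def cell_nets(i, j):
    """Nets attached to the bit cell in row i, column j of the m x n array,
    following the wiring rule above (0 <= i < m, 0 <= j < n)."""
    return {
        "WLL": f"WLL[{i}]",  # first word line, gates T3 for the whole row
        "WLR": f"WLR[{i}]",  # second word line, gates T4 for the whole row
        "BL":  f"BL[{j}]",   # first bit line, shared by the whole column
        "BLB": f"BLB[{j}]",  # second bit line, shared by the whole column
    }
```

All cells in a row share WLL[i]/WLR[i], and all cells in a column share BL[j]/BLB[j], which is what lets one multiply per row accumulate on a single column's bit lines.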
The in-memory computing device of the present invention can operate in two modes: an access mode, for reading and writing weights, and a computation mode, for binary multiply-accumulate (bMAC) operations.
In the access mode, read/write operations of the device are the same as those of a conventional 6T SRAM cell: the address signal is decoded by the row decoding driver and driven onto word lines WLL[i] and WLR[i], selecting the n bit cells of row i+1; the column decoding driver selects the m bit cells of column j+1 via bit lines BL[j] and BLB[j] and closes switch SW[j], so that the voltage produced by the bit cell at row i+1, column j+1 appears on bit line BL[j], is merged with the voltage on bit line BLB[j], and is output through the output terminal of analog-to-digital converter Q[j].
In the computation mode, all m rows of bit cells are activated simultaneously; each row's input activation signal (Input) is pre-encoded onto word lines WLL and WLR, and a weight value is stored in each bit cell. When the input activation Input[i] is '+1', word line WLL[i] is set to '1' and WLR[i] to '0'; when Input[i] is '-1', WLL[i] is set to 0 and WLR[i] to 1. Bit lines BL[j] and BLB[j] are precharged to 0.5V. When the product of Input[i] and the weight value is '+1', bit line BL[j] or BLB[j] is charged; when it is '-1', bit line BL[j] or BLB[j] is discharged; the multiplication results in column j+1 thus accumulate as voltage on BL[j] and BLB[j]. Switch SW[j] is then turned on, merging the voltages on BL[j] and BLB[j] into a total voltage VBL[j], which is output in digital form through analog-to-digital converter Q[j]. Because multiple bit lines BL are read out in parallel, fully parallel computation and high throughput are achieved.
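The computation-mode behavior of one column can be sketched as an idealized charge model. The 0.5V precharge level is from the text; the per-multiply voltage step DELTA and the charge-sharing-as-averaging merge are simplifying assumptions of ours, not part of the patent:

```python
V_PRE = 0.5    # bit-line precharge voltage (from the text)
DELTA = 0.05   # assumed voltage change per multiply result (illustrative)

def column_bmac_voltage(inputs, weights):
    """Idealized bMAC on one column: each +1 product charges and each -1
    product discharges a bit line (BL when the input is +1, BLB when it
    is -1); turning on SW[j] then merges BL and BLB by charge sharing,
    modeled here as averaging."""
    v_bl, v_blb = V_PRE, V_PRE
    for x, w in zip(inputs, weights):
        product = x * w               # binary multiply result, +1 or -1
        if x == +1:
            v_bl += DELTA * product   # cell acts on BL via T3 (WLL high)
        else:
            v_blb += DELTA * product  # cell acts on BLB via T4 (WLR high)
    return (v_bl + v_blb) / 2.0       # merged VBL[j], digitized by ADC Q[j]

def column_bmac_digital(inputs, weights):
    """Reference digital result that the ADC output represents."""
    return sum(x * w for x, w in zip(inputs, weights))
```

Under this model the merged voltage is V_PRE + DELTA × (digital sum) / 2, so the accumulated product count is recoverable from a single analog-to-digital conversion per column.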
As shown in FIG. 2, the in-memory compute bit cell comprises: a transistor T1, a transistor T2, a transistor T3, and a transistor T4. The source of transistor T1 and the source of transistor T2 are both connected to the power supply VDD; the drain of T1 is connected to the gate of T2, and the gate of T1 is connected to the drain of T2. The source of transistor T3 is connected to bit line BL, the drain of T3 to the gate of T1, and the gate of T3 to word line WLL. The source of transistor T4 is connected to bit line BLB, the drain of T4 to the gate of T2, and the gate of T4 to word line WLR. Q is the common point of the gate of T1, the drain of T2, and the drain of T3; QB is the common point of the gate of T2, the drain of T1, and the drain of T4.
The in-memory compute bit cell operates in three steps: first, precharge bit lines BL[j] and BLB[j] to 0.5V; second, disable the precharge and drive the input activation signal onto word lines WLL[i]/WLR[i] through the row decoding driver, so that the product of the input activation and the weight value charges or discharges bit line BL[j]/BLB[j]; third, perform analog-to-digital conversion with converter Q[j] and output the result. The multiply-accumulate operand table is shown in Table 1.
TABLE 1 multiply-accumulate operand table
Input   WLL    WLR    Weight   Q     Bit line action   Value
+1      VDD    0      +1       VDD   BL charged        +1
-1      0      VDD    +1       VDD   BLB discharged    -1
+1      VDD    0      -1       0     BL discharged     -1
-1      0      VDD    -1       0     BLB charged       +1
(When the input activation is 0, word lines WLL and WLR are both at VRST = 0.5V.)
As shown in Table 1, Input is the input activation signal, Weight is the weight value, Value is the voltage value obtained by merging the two bit lines, and Q is the common point of the gate of transistor T1, the drain of transistor T2, and the drain of transistor T3. The value of Input is represented by the combination of high and low levels on word lines WLL and WLR: when the input activation is +1, WLL is at VDD (1V) and WLR at 0V; when it is -1, WLL is at 0V and WLR at VDD; when it is 0, WLL and WLR are both at VRST, i.e., 0.5V. When Q = 0, the weight value is -1; when Q = VDD, the weight value is +1. When the input activation is +1 and the weight is +1, bit line BL is charged and Value is 1; when the input is -1 and the weight is +1, bit line BLB is discharged and Value is -1; when the input is +1 and the weight is -1, bit line BL is discharged and Value is -1; when the input is -1 and the weight is -1, bit line BLB is charged and Value is 1.
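Encoding Table 1 as data makes the key property explicit: in every row, Value equals the arithmetic product Input × Weight. This is a sketch; the tuple layout and helper name are ours, while the values come from the table:

```python
VDD, VRST = 1.0, 0.5  # voltages given in the text

# Rows of Table 1: (Input, WLL, WLR, Weight, Q, bit line acted on, Value)
TABLE_1 = [
    (+1, VDD, 0.0, +1, VDD, "BL",  +1),  # BL charged
    (-1, 0.0, VDD, +1, VDD, "BLB", -1),  # BLB discharged
    (+1, VDD, 0.0, -1, 0.0, "BL",  -1),  # BL discharged
    (-1, 0.0, VDD, -1, 0.0, "BLB", +1),  # BLB charged
]

def check_table(rows):
    """Verify Value == Input * Weight for every row of the table."""
    return all(value == x * w for (x, _, _, w, _, _, value) in rows)
```

This equality is what makes the bit-line charge/discharge a valid binary multiply.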
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to assist in understanding the core concepts of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (5)

1. A four-transistor in-memory computing device based on split word lines, the device comprising:
the device comprises a column decoding driver, a row decoding driver, an in-memory computing array, n switches, and n analog-to-digital converters; the in-memory computing array comprises m × n in-memory compute bit cells arranged in an array;
the n first bit line ends of the column decoding drivers are respectively connected with the n bit lines BL; n second bit line ends of the column decoding drivers are respectively connected with n bit lines BLB;
m first word line ends of the row decoding driver are respectively connected with m word lines WLL; m second word line ends of the row decoding driver are respectively connected with m word lines WLR;
the first bit line terminals of the m in-memory compute bit cells in column j+1 are all connected to the jth bit line BL, the second bit line terminals of the m in-memory compute bit cells in column j+1 are all connected to the jth bit line BLB, the first word line terminals of the n in-memory compute bit cells in row i+1 are all connected to the ith word line WLL, and the second word line terminals of the n in-memory compute bit cells in row i+1 are all connected to the ith word line WLR; where i is an integer with 0 ≤ i < m, and j is an integer with 0 ≤ j < n;
the jth switch is connected with a jth bit line BLB, and a first input end of the jth analog-digital converter is respectively connected with the jth switch and the jth bit line BL;
the in-memory computation bit cell comprises: a transistor T1, a transistor T2, a transistor T3, and a transistor T4;
the source of the transistor T1 and the source of the transistor T2 are both connected to a power supply, the drain of the transistor T1 is connected to the gate of the transistor T2, and the gate of the transistor T1 is connected to the drain of the transistor T2;
the source of the transistor T3 is connected to the bit line BL, the drain of the transistor T3 is connected to the gate of the transistor T1, and the gate of the transistor T3 is connected to the word line WLL;
the source of the transistor T4 is connected to the bit line BLB, the drain of the transistor T4 is connected to the gate of the transistor T2, and the gate of the transistor T4 is connected to the word line WLR.
2. The four-transistor in-memory computing device based on split word lines of claim 1, wherein the transistors T1 and T2 are both PMOS, and the transistors T3 and T4 are both NMOS.
3. The four-transistor in-memory computing device based on split word lines of claim 1, wherein when the input activation signal is +1, word line WLL is at VDD and word line WLR is at 0; when the input activation signal is -1, word line WLL is at 0 and word line WLR is at VDD; where VDD = 1V.
4. The four-transistor in-memory computing device based on split word lines of claim 1, wherein when Q = 0, the weight value is -1; when Q = VDD, the weight value is +1; where VDD = 1V, and Q is the common point of the gate of transistor T1, the drain of transistor T2, and the drain of transistor T3.
5. The four-transistor in-memory computing device based on split word lines of claim 1, wherein when the input activation signal is +1 and the weight value is +1, bit line BL is charged; when the input activation signal is -1 and the weight value is +1, bit line BLB is discharged; when the input activation signal is +1 and the weight value is -1, bit line BL is discharged; when the input activation signal is -1 and the weight value is -1, bit line BLB is charged.
CN202110190886.0A 2021-02-20 2021-02-20 Four-transistor memory computing device based on separated word lines Pending CN112558922A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110190886.0A CN112558922A (en) 2021-02-20 2021-02-20 Four-transistor memory computing device based on separated word lines

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110190886.0A CN112558922A (en) 2021-02-20 2021-02-20 Four-transistor memory computing device based on separated word lines

Publications (1)

Publication Number Publication Date
CN112558922A 2021-03-26

Family

ID=75034379

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110190886.0A Pending CN112558922A (en) 2021-02-20 2021-02-20 Four-transistor memory computing device based on separated word lines

Country Status (1)

Country Link
CN (1) CN112558922A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114530176A (en) * 2022-04-25 2022-05-24 中科南京智能技术研究院 Distributed bit line compensation digital-analog mixed memory computing array
CN114882921A (en) * 2022-07-08 2022-08-09 中科南京智能技术研究院 Multi-bit computing device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10510386B1 (en) * 2018-08-29 2019-12-17 National Tsing Hua University Dynamic bit-line clamping circuit for computing-in-memory applications and clamping method thereof
CN111883191A (en) * 2020-07-14 2020-11-03 安徽大学 10T SRAM cell, and memory logic operation and BCAM circuit based on 10T SRAM cell
CN111899775A (en) * 2020-07-24 2020-11-06 安徽大学 SRAM memory cell circuit capable of realizing multiple logic functions and BCAM operation
CN112036562A (en) * 2020-11-05 2020-12-04 中科院微电子研究所南京智能技术研究院 Bit cell applied to memory computation and memory computation array device
CN112071344A (en) * 2020-09-02 2020-12-11 安徽大学 Circuit for improving linearity and consistency of calculation in memory



Similar Documents

Publication Publication Date Title
CN112151091B (en) 8T SRAM unit and memory computing device
CN112151092B (en) Storage unit, storage array and in-memory computing device based on 4-pipe storage
CN112992223B (en) Memory computing unit, memory computing array and memory computing device
CN111816232B (en) In-memory computing array device based on 4-pipe storage structure
CN111816231B (en) Memory computing device with double-6T SRAM structure
CN113255904B (en) Voltage margin enhanced capacitive coupling storage integrated unit, subarray and device
CN113035251B (en) Digital memory computing array device
CN112558919B (en) Memory computing bit unit and memory computing device
CN112036562B (en) Bit cell applied to memory computation and memory computation array device
CN114089950B (en) Multi-bit multiply-accumulate operation unit and in-memory calculation device
CN113257306B (en) Storage and calculation integrated array and accelerating device based on static random access memory
CN114546335B (en) Memory computing device for multi-bit input and multi-bit weight multiplication accumulation
CN112884140B (en) Multi-bit memory internal computing unit, array and device
CN113782072B (en) Multi-bit memory computing circuit
CN112558922A (en) Four-transistor memory computing device based on separated word lines
CN114300012B (en) Decoupling SRAM memory computing device
CN113823343B (en) Separated computing device based on 6T-SRAM
CN113838504B (en) Single-bit memory computing circuit based on ReRAM
CN112232502B (en) Same or memory unit and memory array device
CN113077050B (en) Digital domain computing circuit device for neural network processing
CN114895869B (en) Multi-bit memory computing device with symbols
CN113986195B (en) Delay type single-bit memory computing unit and device
CN116204490A (en) 7T memory circuit and multiply-accumulate operation circuit based on low-voltage technology
CN113391786B (en) Computing device for multi-bit positive and negative weights
CN113258910B (en) Computing device based on pulse width modulation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2021-03-26