CN113642706A - Neuron network unit, convolution operation module and convolution neural network - Google Patents

Neuron network unit, convolution operation module and convolution neural network

Info

Publication number
CN113642706A
Authority
CN
China
Prior art keywords
transistor
readout
isolation branch
convolution operation
reverse
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110913994.6A
Other languages
Chinese (zh)
Other versions
CN113642706B (en)
Inventor
陈静
赵瑞勇
谢甜甜
王青
吕迎欢
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Institute of Microsystem and Information Technology of CAS
Original Assignee
Shanghai Institute of Microsystem and Information Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Microsystem and Information Technology of CAS filed Critical Shanghai Institute of Microsystem and Information Technology of CAS
Priority to CN202110913994.6A priority Critical patent/CN113642706B/en
Publication of CN113642706A publication Critical patent/CN113642706A/en
Application granted granted Critical
Publication of CN113642706B publication Critical patent/CN113642706B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Complex Calculations (AREA)
  • Logic Circuits (AREA)

Abstract

The present invention provides a neuron network unit comprising a static random access memory (SRAM) cell, a forward readout isolation branch, and a reverse readout isolation branch. The SRAM cell includes a first pass transistor and a second pass transistor electrically connected in series, and two cross-coupled inverters (a first inverter and a second inverter) connected in parallel between the first and second pass transistors. The forward readout isolation branch is connected between the first pass transistor and the two cross-coupled inverters and converts an externally input digital voltage into an analog current output according to the control signal stored in the SRAM cell. The reverse readout isolation branch is connected between the second pass transistor and the two cross-coupled inverters and likewise converts an externally input digital voltage into an analog current output according to the stored control signal.

Figure 202110913994

Description

Neuron network unit, convolution operation module and convolution neural network
Technical Field
The invention relates to the field of integrated circuit design, and in particular to a neuron network unit, a convolution operation module, and a convolutional neural network.
Background
With the arrival of the big data era, artificial intelligence has become an important field, and dedicated neural network chips are a key hardware tool for computing systems to perform neural network computation efficiently. Traditional computing systems adopt the von Neumann architecture, in which computation and storage are separate; under big data workloads, memory bandwidth and memory power consumption have begun to dominate overall computing bandwidth and energy, and a significant portion of this power is spent on memory access and on moving data to and from the compute unit. In-memory computing, in which the memory takes the leading role, combines the neural network algorithm with the storage hardware architecture and thereby greatly reduces the large time and power overhead caused by data movement.
A conventional convolutional neural network compute-in-memory SRAM (CIM SRAM) reuses the word line and bit line architecture of a standard SRAM, so that when data is input, each row word line receives the same set of input data. Although different convolution kernels can then perform multiply-add operations on this same data, current simple convolutional neural network algorithms apply at most 16 to 20 groups of convolution kernels to the data of one convolution window. For the SRAM array this means that only 16 to 20 columns of bit lines take part in the operation, and the remaining resources are wasted.
Disclosure of Invention
The technical problem to be solved by the present invention is to provide a neuron network unit, a convolution operation module, and a convolutional neural network that improve the resource utilization of the array cells and reduce power consumption.
To solve the above problem, the present invention provides a neuron network unit that includes a static random access memory (SRAM) cell, a forward readout isolation branch, and a reverse readout isolation branch. The SRAM cell comprises a first pass transistor and a second pass transistor electrically connected in series, and two cross-coupled inverters (a first inverter and a second inverter) connected in parallel between the first and second pass transistors. The forward readout isolation branch is connected between the first pass transistor and the two cross-coupled inverters, and converts an externally input digital voltage into an analog current output according to the control signal stored in the SRAM cell. The reverse readout isolation branch is connected between the second pass transistor and the two cross-coupled inverters, and likewise converts an externally input digital voltage into an analog current output according to the stored control signal.
Optionally, the forward readout isolation branch comprises a first readout transistor and a second readout transistor connected in series. The gate of the first readout transistor is connected between the first pass transistor and the two cross-coupled inverters and serves as the control terminal of the forward readout isolation branch, while the drain/source of the first readout transistor is connected to the operating voltage. The gate of the second readout transistor serves as the external input terminal, and the source/drain of the second readout transistor serves as the output terminal of the forward readout isolation branch.
Optionally, the reverse readout isolation branch comprises a third readout transistor and a fourth readout transistor connected in series. The gate of the third readout transistor is connected between the second pass transistor and the two cross-coupled inverters and serves as the control terminal of the reverse readout isolation branch, while the drain/source of the third readout transistor is grounded. The gate of the fourth readout transistor serves as the external input terminal, and the source/drain of the fourth readout transistor serves as the output terminal of the reverse readout isolation branch.
The invention also provides a convolution operation module comprising a convolution operation sub-block and a pulse frequency quantization unit. The convolution operation sub-block comprises an array of N × N neuron network units, where N is a positive integer, in which: the first and second pass transistors of each neuron network unit are connected to a write word line of the array; the drain/source of the first pass transistor and the source/drain of the second pass transistor are connected to a bit line and a complementary bit line, respectively; the external input terminal of the forward readout isolation branch is connected to a row word line, and the external input terminal of the reverse readout isolation branch is connected to a column word line; the output terminal of the forward readout isolation branch is connected to a row current output, and the output terminal of the reverse readout isolation branch is connected to a column current output. The pulse frequency quantization unit is connected to the output terminals of the convolution operation sub-block; it converts the current outputs of the sub-block into pulse signals, counts the pulse frequency, and thereby produces a multi-bit digital output.
Optionally, the pulse frequency quantization unit comprises an amplifier, a comparator, and a counter connected in series. The amplifier, serving as the input stage of the pulse frequency quantization unit, is connected in parallel with an integrating capacitor and in series with the comparator. The output terminal of the comparator is further connected across the integrating capacitor through a discharge control switch, which discharges the integrating capacitor whenever the comparator outputs a high level. The integrating capacitor is charged by the analog input current, and the resulting voltage, amplified by the amplifier, drives the pulse output of the comparator, so that the current value is equivalently converted into a pulse frequency. The counter, connected in series with the comparator, counts the number of pulses to form the multi-bit digital output.
The invention also provides a convolutional neural network comprising an array of N × N of the above convolution operation modules, where N is a positive integer.
According to the invention, each row of word lines receives two different sets of data and performs a separate convolution operation on each, so that more groups of convolution kernels can be applied to the data of the same convolution window and resource waste is avoided.
Drawings
Fig. 1 is a schematic circuit diagram of a neuron network unit according to an embodiment of the present invention.
Fig. 2 is a schematic circuit diagram of a convolution operation sub-block in a convolution operation module according to an embodiment of the present invention.
Fig. 3 is a schematic circuit diagram of a pulse frequency quantization unit in a convolution operation module according to an embodiment of the present invention.
Detailed Description
The following describes in detail specific embodiments of the neuron network unit, the convolution operation module, and the convolutional neural network of the present invention with reference to the accompanying drawings.
In the following description, one particular choice of source and drain connections is given for each transistor; it should be noted that in other embodiments the source and drain connections of a transistor may be interchanged equivalently without affecting the actual electrical function.
Fig. 1 is a schematic circuit diagram of a neuron network unit according to an embodiment of the present invention; it includes an SRAM cell M1, a forward readout isolation branch B1, and a reverse readout isolation branch B2. The SRAM cell M1 has a 6T structure comprising a first pass transistor T1 and a second pass transistor T2 electrically connected in series, and two cross-coupled inverters D1 and D2 connected in parallel between T1 and T2.
In this embodiment, the forward readout isolation branch B1 includes a first readout transistor R1 and a second readout transistor R2 connected in series. The gate of R1 is connected between the first pass transistor T1 and the two cross-coupled inverters and serves as the control terminal of the forward readout isolation branch, while the drain of R1 is connected to the operating voltage VDD. The gate of R2 serves as the external input terminal, and the source of R2 serves as the output terminal of the forward readout isolation branch. Connected between T1 and the two cross-coupled inverters, the branch B1 converts an externally input digital voltage into an analog current output according to the control signal stored in the SRAM cell M1.
In this embodiment, the reverse readout isolation branch B2 includes a third readout transistor R3 and a fourth readout transistor R4 connected in series. The gate of R3 is connected between the second pass transistor T2 and the two cross-coupled inverters and serves as the control terminal of the reverse readout isolation branch, while the drain of R3 is grounded. The gate of R4 serves as the external input terminal, and the source of R4 serves as the output terminal of the reverse readout isolation branch. Connected between T2 and the two cross-coupled inverters, the branch B2 converts an externally input digital voltage into an analog current output according to the control signal stored in the SRAM cell M1.
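For illustration only, the behavior of this cell can be summarized as two gated unit-current sources: the forward branch conducts only when the stored bit and the row input are both high, while the reverse branch is gated by the storage node on the T2 side (assumed here to be the complement of the stored bit) together with the column input. The following minimal Python sketch captures this abstraction; the function name, the unit-current model, and the polarity assumptions are illustrative and not taken from the patent itself.

```python
def neuron_cell_currents(q, row_in, col_in, i_unit=1.0):
    """Behavioral sketch of one neuron network unit (Fig. 1).

    q       : bit stored in the 6T SRAM cell (node on the T1 side); the node
              on the T2 side is modeled as its complement
    row_in  : digital input on the row word line driving the forward branch (gate of R2)
    col_in  : digital input on the column word line driving the reverse branch (gate of R4)
    i_unit  : current contributed by one conducting branch (arbitrary units)
    """
    qb = 1 - q
    # Forward branch R1/R2: contributes current to the row output only when
    # both the stored bit and the row input are high (a 1-bit multiply).
    i_row = i_unit if (q and row_in) else 0.0
    # Reverse branch R3/R4: gated by the complementary node and the column input.
    i_col = i_unit if (qb and col_in) else 0.0
    return i_row, i_col
```

In this abstraction each cell performs two independent 1-bit multiplications per cycle, which is the basis for the dual-readout array described next.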
Next, a specific embodiment of the convolution operation module of the present invention is given. The module comprises a convolution operation sub-block and a pulse frequency quantization unit. Fig. 2 is a schematic circuit diagram of a convolution operation sub-block in a convolution operation module according to an embodiment of the present invention. The convolution operation sub-block comprises an array of N × N neuron network units; this embodiment uses a 3 × 3 array of units C11 to C33 as an example. The first pass transistor T1 and the second pass transistor T2 of each neuron network unit are connected to a write word line WWL of the array; the drain of T1 and the source of T2 are connected to a bit line BL and a complementary bit line BLB, respectively; the external input terminal of the forward readout isolation branch is connected to a row word line LWL, and that of the reverse readout isolation branch to a column word line CWL; the output terminal of the forward readout isolation branch is connected to a row current output Lo, and that of the reverse readout isolation branch to a column current output Co. In three cycles, this array can perform the convolution of a 4 × 8 input matrix with a 3 × 3 convolution kernel.
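Building on the hypothetical cell model above, the following sketch shows how the sub-block's two current outputs each accumulate a multiply-add result, so that two different input windows are convolved with the stored kernel at the same time. The idealized Kirchhoff current summation, the gating of the reverse branches by the complementary storage nodes, and all names are our illustrative assumptions, not the patent's exact signal mapping.

```python
import numpy as np

def subblock_mac(kernel, window_fwd, window_rev, i_unit=1.0):
    """Idealized multiply-accumulate of the 3x3 convolution sub-block (Fig. 2).

    kernel     : 3x3 binary weights stored in the SRAM cells (C11..C33)
    window_fwd : 3x3 binary input window applied through the row word lines LWL
    window_rev : 3x3 binary input window applied through the column word lines CWL
    """
    kernel = np.asarray(kernel)
    # Forward branches sum their currents on the row current outputs Lo...
    i_fwd = i_unit * np.sum(kernel * np.asarray(window_fwd))
    # ...while reverse branches, gated by the complementary storage nodes,
    # sum on the column current outputs Co (complement gating is an assumption).
    i_rev = i_unit * np.sum((1 - kernel) * np.asarray(window_rev))
    return i_fwd, i_rev
```

With both readout paths active, one stored kernel serves two input windows per cycle, doubling throughput relative to a single-readout array, which is the utilization gain claimed for this architecture.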
Fig. 3 is a schematic circuit diagram of a pulse frequency quantization unit in a convolution operation module according to an embodiment of the present invention. The pulse frequency quantization unit is connected to the output terminals of the convolution operation sub-block; it converts the current outputs of the sub-block into pulse signals, counts the pulse frequency, and thereby produces a multi-bit digital output. As one embodiment of the foregoing, as shown in Fig. 3, the pulse frequency quantization unit includes an amplifier 21, a comparator 22, and a counter 23 connected in series. The amplifier 21 is connected in parallel with an integrating capacitor C and in series with the comparator 22, and serves as the input stage of the pulse frequency quantization unit. The output terminal of the comparator 22 is further connected across the integrating capacitor C through a discharge control switch K, which discharges C whenever the comparator 22 outputs a high level (1). The integrating capacitor C is charged by the analog input current, and the resulting voltage, amplified by the amplifier 21, drives the pulse output of the comparator 22, so that the current value is equivalently converted into a pulse frequency. Specifically, the output of the comparator 22 acts as the discharge signal of the integrating capacitor C: each time the analog input current charges C up to the switching voltage of the comparator 22, the comparator output flips 0 → 1; while the output is 1, the capacitor C is discharged and the output returns 1 → 0. The comparator 22 thus produces a pulse train whose frequency increases with the analog input current. The counter 23, connected in series with the comparator 22, counts the number of pulses to form the multi-bit digital output.
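The current-to-frequency conversion can be checked with a minimal discrete-time simulation. This is a sketch under idealized assumptions (instantaneous discharge, fixed comparator threshold), and the numeric parameter values are illustrative only, not taken from the patent.

```python
def pulse_frequency_quantize(i_in, t_total=1e-3, dt=1e-7, c=1e-12, v_th=0.5):
    """Idealized model of the pulse frequency quantization unit (Fig. 3).

    The analog input current i_in charges the integrating capacitor C; each
    time the capacitor voltage reaches the comparator threshold v_th, the
    comparator emits one pulse and the discharge switch K resets the
    capacitor.  The returned pulse count is the counter's digital output.
    """
    v = 0.0
    pulses = 0
    for _ in range(int(t_total / dt)):
        v += i_in * dt / c      # charging: dV = I * dt / C
        if v >= v_th:           # comparator output flips 0 -> 1 ...
            pulses += 1         # ... the counter registers one pulse ...
            v = 0.0             # ... and switch K discharges C (output returns to 0)
    return pulses

# A larger input current charges C faster, so the pulse count grows with i_in:
assert pulse_frequency_quantize(1e-6) < pulse_frequency_quantize(2e-6)
```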
The convolution operation modules, each composed of the convolution operation sub-block of Fig. 2 and the pulse frequency quantization unit of Fig. 3, can in turn be arranged into an N × N array to form a convolutional neural network and accelerate convolutional neural network computation.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the present invention, and such modifications and improvements shall also fall within the protection scope of the present invention.

Claims (6)

1. A neuron network unit, comprising a static random access memory (SRAM) cell, a forward readout isolation branch, and a reverse readout isolation branch; the SRAM cell comprising a first pass transistor and a second pass transistor electrically connected in series, and two cross-coupled inverters (a first inverter and a second inverter) connected in parallel between the first and second pass transistors; characterized in that:
the forward readout isolation branch is connected between the first pass transistor and the two cross-coupled inverters, and is configured to convert an externally input digital voltage into an analog current output according to a control signal stored in the SRAM cell; and
the reverse readout isolation branch is connected between the second pass transistor and the two cross-coupled inverters, and is configured to convert an externally input digital voltage into an analog current output according to the control signal stored in the SRAM cell.

2. The neuron network unit according to claim 1, characterized in that the forward readout isolation branch comprises a first readout transistor and a second readout transistor connected in series; the gate of the first readout transistor is connected between the first pass transistor and the two cross-coupled inverters and serves as the control terminal of the forward readout isolation branch, and the drain/source of the first readout transistor is connected to the operating voltage; the gate of the second readout transistor serves as an external input terminal, and the source/drain of the second readout transistor serves as the output terminal of the forward readout isolation branch.

3. The neuron network unit according to claim 1, characterized in that the reverse readout isolation branch comprises a third readout transistor and a fourth readout transistor connected in series; the gate of the third readout transistor is connected between the second pass transistor and the two cross-coupled inverters and serves as the control terminal of the reverse readout isolation branch, and the drain/source of the third readout transistor is grounded; the gate of the fourth readout transistor serves as an external input terminal, and the source/drain of the fourth readout transistor serves as the output terminal of the reverse readout isolation branch.

4. A convolution operation module, characterized in that the module comprises a convolution operation sub-block and a pulse frequency quantization unit; the convolution operation sub-block comprises an array of N × N neuron network units according to claim 1, N being a positive integer, wherein:
the first and second pass transistors of each neuron network unit are connected to a write word line of the array; the drain/source of the first pass transistor and the source/drain of the second pass transistor are connected to a bit line and a complementary bit line, respectively; the external input terminal of the forward readout isolation branch is connected to a row word line, and the external input terminal of the reverse readout isolation branch is connected to a column word line; the output terminal of the forward readout isolation branch is connected to a row current output, and the output terminal of the reverse readout isolation branch is connected to a column current output; and
the pulse frequency quantization unit is connected to the output terminals of the convolution operation sub-block, converts the current outputs of the sub-block into pulse signals for pulse frequency counting, and thereby produces a multi-bit digital output.

5. The convolution operation module according to claim 4, characterized in that the pulse frequency quantization unit comprises an amplifier, a comparator, and a counter connected in series:
the amplifier is connected in parallel with an integrating capacitor and in series with the comparator, serving as the input terminal of the pulse frequency quantization unit;
the output terminal of the comparator is further connected across the integrating capacitor through a discharge control switch, the discharge control switch being configured to discharge the integrating capacitor when the comparator outputs a high level;
the integrating capacitor is charged by the analog input current and the result, amplified by the amplifier, causes the pulse output of the comparator, thereby equivalently converting the current value into a pulse frequency; and
the counter is connected in series with the comparator and counts the number of pulses to form the multi-bit digital output.

6. A convolutional neural network, comprising an array of N × N convolution operation modules according to claim 4, N being a positive integer.
CN202110913994.6A 2021-08-10 2021-08-10 A convolution operation device and a convolution neural network system Active CN113642706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110913994.6A CN113642706B (en) 2021-08-10 2021-08-10 A convolution operation device and a convolution neural network system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110913994.6A CN113642706B (en) 2021-08-10 2021-08-10 A convolution operation device and a convolution neural network system

Publications (2)

Publication Number Publication Date
CN113642706A true CN113642706A (en) 2021-11-12
CN113642706B CN113642706B (en) 2024-12-27

Family

ID=78420503

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110913994.6A Active CN113642706B (en) 2021-08-10 2021-08-10 A convolution operation device and a convolution neural network system

Country Status (1)

Country Link
CN (1) CN113642706B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200125935A1 (en) * 2017-06-21 2020-04-23 Semiconductor Energy Laboratory Co., Ltd. Semiconductor device having neural network
CN112435700A (en) * 2019-08-26 2021-03-02 意法半导体国际有限公司 In-memory compute, high density array
CN110717580A (en) * 2019-09-27 2020-01-21 东南大学 Computational Array Based on Voltage Modulation for Binarized Neural Networks
CN111126579A (en) * 2019-11-05 2020-05-08 复旦大学 An in-memory computing device suitable for binary convolutional neural network computing
CN111652363A (en) * 2020-06-08 2020-09-11 中国科学院微电子研究所 Storage and calculation integrated circuit
CN112562756A (en) * 2020-12-15 2021-03-26 中国科学院上海微系统与信息技术研究所 Radiation-resistant static random access memory cell and memory
CN112232502A (en) * 2020-12-17 2021-01-15 中科院微电子研究所南京智能技术研究院 A kind of XOR storage unit and storage array device
CN112687308A (en) * 2020-12-29 2021-04-20 中国科学院上海微系统与信息技术研究所 Low-power consumption static random access memory unit and memory

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SYED AHMED AAMIR: "An Accelerated LIF Neuronal Network Array for a Large-Scale Mixed-Signal Neuromorphic Architecture", IEEE, 27 June 2018 (2018-06-27) *
ZHU Hangtao: "Hopfield neural network based on neuron transistors and memristors and its application in associative memory", Journal of Southwest University (Natural Science Edition), vol. 40, no. 2, 31 December 2018 (2018-12-31) *

Also Published As

Publication number Publication date
CN113642706B (en) 2024-12-27

Similar Documents

Publication Publication Date Title
US11948659B2 (en) Sub-cell, mac array and bit-width reconfigurable mixed-signal in-memory computing module
CN111816231B (en) Memory computing device with double-6T SRAM structure
CN113627601B (en) Subunit, MAC array and bit width reconfigurable analog-digital mixed memory internal computing module
CN114546335B (en) Memory computing device for multi-bit input and multi-bit weight multiplication accumulation
CN113255904B (en) Voltage margin enhanced capacitive coupling storage and computing integrated unit, sub-array and device
CN115048075A (en) SRAM (static random Access memory) storage and calculation integrated chip based on capacitive coupling
CN114830136B (en) Power efficient near-memory analog multiply and accumulate (MAC)
CN118034644B (en) A high-density and high-reliability in-memory computing circuit based on eDRAM
CN117636945B (en) 5-bit XOR and XOR accumulation circuit with sign bit, CIM circuit
CN112036562A (en) A bit unit applied to in-memory computing and a storage-computation array device
CN117271436B (en) SRAM-based current mirror complementary in-memory computing macro circuits and chips
CN113346895B (en) Simulation and storage integrated structure based on pulse cut-off circuit
CN117130978A (en) Charge domain in-memory calculation circuit and calculation method based on sparse tracking ADC
CN115910152A (en) Charge domain memory calculation circuit and calculation circuit with positive and negative number operation function
CN114974337A (en) A time-domain in-memory computing circuit based on spin magnetic random access memory
CN114895869B (en) Multi-bit memory computing device with symbols
CN118298872B (en) In-memory computing circuit with configurable input weight bit and chip thereof
Kim et al. A charge-domain 10T SRAM based in-memory-computing macro for low energy and highly accurate DNN inference
CN113642706B (en) A convolution operation device and a convolution neural network system
CN118093507A (en) Memory calculation circuit structure based on 6T-SRAM
CN117877553A (en) In-memory computing circuit for nonvolatile random access memory
CN119271172B (en) Charge domain signed multiplication, multi-bit multiplication and accumulation circuit and chip thereof
TWI788964B (en) Subunit, MAC array, bit width reconfigurable modulus hybrid in-memory computing module
CN114647398B (en) Carry bypass adder-based in-memory computing device
US20240134319A1 (en) Time-to-digital converter-based device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant