TW201741943A - Deep learning neural network classifier using non-volatile memory array - Google Patents


Info

Publication number
TW201741943A
TW201741943A (application number TW106116163A)
Authority
TW
Taiwan
Prior art keywords
memory cells
lines
rows
columns
synapses
Prior art date
Application number
TW106116163A
Other languages
Chinese (zh)
Other versions
TWI631517B (en)
Inventor
法諾德 M. 巴亞特
郭昕婕
迪米崔 史楚寇夫
恩漢 杜
曉萬 陳
維平 蒂瓦里
馬克 萊坦
Original Assignee
超捷公司
加州大學董事會
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 超捷公司 and 加州大學董事會
Publication of TW201741943A
Application granted
Publication of TWI631517B


Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 11/00 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C 11/54 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using elements simulating biological cells, e.g. neuron
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 Interfaces specially adapted for storage systems
    • G06F 3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/0671 In-line storage system
    • G06F 3/0683 Plurality of storage devices
    • G06F 3/0688 Non-volatile semiconductor memory arrays
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/08 Address circuits; Decoders; Word-line control circuits
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/10 Programming or data input circuits
    • G11C 16/12 Programming voltage switching circuits
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/10 Programming or data input circuits
    • G11C 16/14 Circuits for erasing electrically, e.g. erase voltage switching circuits
    • G11C 16/16 Circuits for erasing electrically, e.g. erase voltage switching circuits for erasing blocks, e.g. arrays, words, groups
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 16/00 Erasable programmable read-only memories
    • G11C 16/02 Erasable programmable read-only memories electrically programmable
    • G11C 16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C 16/34 Determination of programming status, e.g. threshold voltage, overprogramming or underprogramming, retention
    • G11C 16/3436 Arrangements for verifying correct programming or erasure
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 29/00 Checking stores for correct operation; Subsequent repair; Testing stores during standby or offline operation
    • G11C 29/04 Detection or location of defective memory elements, e.g. cell constructional details, timing of test signals
    • G11C 29/08 Functional testing, e.g. testing during refresh, power-on self testing [POST] or distributed testing
    • G11C 29/12 Built-in arrangements for testing, e.g. built-in self testing [BIST] or interconnection details
    • G11C 29/38 Response verification devices
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C 2216/00 Indexing scheme relating to G11C16/00 and subgroups, for features not directly covered by these groups
    • G11C 2216/02 Structural aspects of erasable programmable read-only memories
    • G11C 2216/04 Nonvolatile memory cell provided with a separate control gate for erasing the cells, i.e. erase gate, independent of the normal read control gate

Abstract

An artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses. The synapses are configured to receive inputs and to generate therefrom outputs. Neurons are configured to receive the outputs. The synapses include a plurality of memory cells, wherein each of the memory cells includes spaced apart source and drain regions formed in a semiconductor substrate with a channel region extending there between, a floating gate disposed over and insulated from a first portion of the channel region and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells are configured to multiply the inputs by the stored weight values to generate the outputs.

Description

Deep learning neural network classifier using non-volatile memory array

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Application No. 62/337,760, filed on May 17, 2016, which is incorporated herein by reference.

The present invention relates to neural networks.

Artificial neural networks mimic biological neural networks (the central nervous system of animals, in particular the brain), which are used to estimate or approximate functions that can depend on a large number of inputs and are generally unknown. Artificial neural networks generally include layers of interconnected "neurons" that exchange messages with each other. Figure 1 illustrates an artificial neural network, where the circles represent the inputs or layers of neurons. The connections (called synapses) are represented by arrows, and have numeric weights that can be tuned based on experience. This makes the neural network adaptive to inputs and capable of learning. Typically, a neural network includes a layer of multiple inputs. There are typically one or more intermediate layers of neurons, and an output layer of neurons that provides the output of the neural network. The neurons at each level make decisions, individually or collectively, based on the data received from the synapses.
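The layer structure described above can be sketched in code. The following minimal Python illustration is not from the patent; the function names and the choice of activation (a ReLU) are purely illustrative:

```python
def neuron_layer(inputs, weights, activation=lambda x: max(0.0, x)):
    """Compute one layer of an artificial neural network.

    Each output neuron sums its inputs, scaled by the synaptic weights
    on its incoming connections (the "arrows" of Figure 1), and then
    applies an activation function (ReLU here, as an example).
    """
    outputs = []
    for row in weights:  # one row of weights per output neuron
        total = sum(w * x for w, x in zip(row, inputs))
        outputs.append(activation(total))
    return outputs

# Two inputs feeding three neurons; the weights are what gets
# "tuned based on experience" during learning.
x = [1.0, 2.0]
W = [[0.5, 0.25],
     [1.0, 1.0],
     [-0.5, 0.75]]
print(neuron_layer(x, W))  # [1.0, 3.0, 1.0]
```

Stacking several such layers, with each layer's outputs feeding the next layer's inputs, yields the multi-level network of Figure 1.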

One of the major challenges in the development of artificial neural networks for high-performance information processing is the lack of adequate hardware technology. Indeed, practical neural networks rely on a very large number of synapses, enabling high connectivity between neurons, i.e. very high computational parallelism. In principle, such complexity can be achieved with digital supercomputers or clusters of specialized graphics processing units. However, in addition to high cost, these approaches also suffer from mediocre energy efficiency compared to biological networks, which consume much less energy primarily because they perform low-precision analog computation. CMOS analog circuits have been used for artificial neural networks, but most CMOS-implemented synapses have been too bulky given the high number of neurons and synapses required.

The aforementioned problems and needs are addressed by an artificial neural network device that utilizes one or more non-volatile memory arrays as the synapses. The neural network device includes a first plurality of synapses configured to receive a first plurality of inputs and to generate therefrom a first plurality of outputs, and a first plurality of neurons configured to receive the first plurality of outputs. The first plurality of synapses includes a plurality of memory cells, wherein each of the memory cells includes spaced-apart source and drain regions formed in a semiconductor substrate with a channel region extending therebetween, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region. Each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate. The plurality of memory cells is configured to multiply the first plurality of inputs by the stored weight values to generate the first plurality of outputs.
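The multiply-accumulate behavior claimed here — each cell scaling an input by its stored weight, with results summed per output — behaves like a vector-matrix multiplier. A hypothetical sketch (treating each stored weight as a cell conductance and letting shared column lines sum the cell currents; all names and values are illustrative, not from the patent):

```python
def vmm(input_voltages, conductances):
    """Model a memory array as a vector-matrix multiplier.

    The cell at (row i, column j) passes a current proportional to
    input_voltages[i] * conductances[i][j]; currents on a shared
    column line add up, yielding one summed output per column.
    """
    n_cols = len(conductances[0])
    return [
        sum(v * g_row[j] for v, g_row in zip(input_voltages, conductances))
        for j in range(n_cols)
    ]

v_in = [1.0, 0.5]       # inputs applied on the rows
G = [[2.0, 0.0],        # weights stored cell by cell
     [4.0, 6.0]]
print(vmm(v_in, G))     # summed column outputs: [4.0, 3.0]
```

The entire multiply-and-sum happens "in place" in the array, which is what makes this organization attractive for massively parallel synapse evaluation.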

Other objects and features of the present invention will become apparent by a review of the specification, claims, and appended figures.

10‧‧‧memory cell
12‧‧‧semiconductor substrate/substrate
14‧‧‧source region/source
14a‧‧‧source line/horizontal source line
16‧‧‧drain region/drain
16a‧‧‧bit line
16a1‧‧‧first bit line
16a2‧‧‧second bit line
18‧‧‧channel region
20‧‧‧floating gate
22‧‧‧control gate
22b‧‧‧second portion
22a‧‧‧first portion/control gate line/horizontal control gate line/control gate
22a1‧‧‧first control gate line
22a2‧‧‧second control gate line
24‧‧‧intermediate insulator
26‧‧‧gate oxide
28‧‧‧select gate
28a‧‧‧horizontal select gate line
28a1‧‧‧select gate line
28a2‧‧‧select gate line
30‧‧‧erase gate
30a‧‧‧erase gate line
31‧‧‧digital-to-analog converter
32, 32a–e‧‧‧VMM
33‧‧‧array of non-volatile memory cells/memory array
34‧‧‧erase gate and word line gate decoder
35‧‧‧control gate decoder
36‧‧‧bit line decoder
37‧‧‧source line decoder
38‧‧‧differential summing op-amp/summing op-amp
39‧‧‧activation function circuit
50‧‧‧current-to-voltage log converter
52‧‧‧voltage-to-current log converter
54‧‧‧Gnd-referenced current summer
56‧‧‧Vdd-referenced current summer
C1‧‧‧feature map
C2‧‧‧feature map
C3‧‧‧feature map
CB1‧‧‧synapses
CB2‧‧‧synapses
CB3‧‧‧synapses
CB4‧‧‧synapses
CG‧‧‧control gate line
D‧‧‧shared drain region
DAC 40‧‧‧digital-to-analog converter
EG‧‧‧erase gate line
IComp 44‧‧‧current comparator
Iin‧‧‧incoming current/input current
Iin0‧‧‧incoming current/input current
Iin1‧‧‧incoming current/input current
Iin2‧‧‧incoming current/input current
Iin3‧‧‧incoming current/input current
Iin4‧‧‧incoming current/input current
Iin5‧‧‧incoming current/input current
Iin7‧‧‧incoming current/input current
Iout‧‧‧output current/output/current
Iout0‧‧‧matrix output
Iout1‧‧‧matrix output
Iout2‧‧‧matrix output
Iout3‧‧‧matrix output
Iout4‧‧‧matrix output
P1‧‧‧activation function
P2‧‧‧activation function
S‧‧‧shared source region
S1‧‧‧feature map
S2‧‧‧feature map
S3‧‧‧output
V/I Conv 42‧‧‧voltage-to-current converter
V/I Conv 48‧‧‧voltage-to-current converter
VComp 46‧‧‧voltage comparator
Vin‧‧‧input voltage
Vin0‧‧‧matrix input/input
Vin1‧‧‧matrix input
Vin2‧‧‧matrix input
Vin3‧‧‧matrix input
Vin4‧‧‧matrix input
Vin5‧‧‧matrix input
Vin6‧‧‧matrix input
Vin7‧‧‧matrix input
WL‧‧‧select gate line

Figure 1 is a diagram illustrating an artificial neural network.
Figure 2 is a side cross-sectional view of a conventional 2-gate non-volatile memory cell.
Figure 3 is a diagram illustrating a conventional array architecture for the memory cell of Figure 2.
Figure 4 is a side cross-sectional view of a conventional 2-gate non-volatile memory cell.
Figure 5 is a diagram illustrating a conventional array architecture for the memory cell of Figure 4.
Figure 6 is a side cross-sectional view of a conventional 4-gate non-volatile memory cell.
Figure 7 is a diagram illustrating a conventional array architecture for the memory cell of Figure 6.
Figure 8A is a diagram illustrating evenly spaced neural network weight level assignments.
Figure 8B is a diagram illustrating unevenly spaced neural network weight level assignments.
Figure 9 is a flow chart illustrating a bidirectional tuning algorithm.
Figure 10 is a block diagram illustrating weight mapping using current comparison.
Figure 11 is a block diagram illustrating weight mapping using voltage comparison.
Figure 12 is a diagram illustrating the different levels of an exemplary neural network utilizing a non-volatile memory array.
Figure 13 is a block diagram illustrating a vector multiplier matrix.
Figure 14 is a block diagram illustrating various levels of a vector multiplier matrix.
Figures 15-16 are schematic diagrams illustrating a first architecture of an array of four-gate memory cells.
Figures 17-18 are schematic diagrams illustrating a second architecture of an array of four-gate memory cells.
Figure 19 is a schematic diagram illustrating a third architecture of an array of four-gate memory cells.
Figure 20 is a schematic diagram illustrating a fourth architecture of an array of four-gate memory cells.
Figure 21 is a schematic diagram illustrating a fifth architecture of an array of four-gate memory cells.
Figure 22 is a schematic diagram illustrating a sixth architecture of an array of four-gate memory cells.
Figure 23 is a schematic diagram illustrating a first architecture of an array of two-gate memory cells.
Figure 24 is a schematic diagram illustrating a second architecture of an array of two-gate memory cells.
Figure 25 is a diagram illustrating a current-to-voltage log converter.
Figure 26 is a diagram illustrating a voltage-to-current log converter.
Figure 27 is a diagram illustrating a Gnd-referenced current summer.
Figure 28 is a diagram illustrating a Vdd-referenced current summer.
Figure 29 is a diagram illustrating the use of N2 neural net inputs of a non-volatile memory array.
Figure 30 is a diagram illustrating the use of N2 neural net inputs of a non-volatile memory array.
Figure 31 is a diagram illustrating the use of neural net inputs of a non-volatile memory array with periodically shifting input lines.
Figure 32 is a schematic diagram illustrating the memory array architecture of Figure 15, but with periodically shifting input lines.
Figure 33 is a schematic diagram illustrating the memory array architecture of Figure 20, but with periodically shifting input lines.

The artificial neural network of the present invention utilizes a combination of CMOS technology and non-volatile memory arrays. Digital non-volatile memories are well known. For example, U.S. Patent 5,029,130 ("the '130 patent") discloses an array of split-gate non-volatile memory cells, and is incorporated herein by reference for all purposes. The memory cell is shown in Figure 2. Each memory cell 10 includes source and drain regions 14 and 16 formed in a semiconductor substrate 12, with a channel region 18 therebetween. A floating gate 20 is formed over and insulated from (and controls the conductivity of) a first portion of the channel region 18, and over a portion of the drain region 16. A control gate 22 has a first portion 22a that is disposed over and insulated from (and controls the conductivity of) a second portion of the channel region 18, and a second portion 22b that extends up and over the floating gate 20. The floating gate 20 and the control gate 22 are insulated from the substrate 12 by a gate oxide 26.

The memory cell is erased (where electrons are removed from the floating gate) by placing a high positive voltage on the control gate 22, which causes electrons on the floating gate 20 to tunnel from the floating gate 20, through the intermediate insulator 24, to the control gate 22 via Fowler-Nordheim tunneling.

The memory cell is programmed (where electrons are placed on the floating gate) by placing a positive voltage on the control gate 22 and a positive voltage on the drain 16. Electron current will flow from the source 14 toward the drain 16. The electrons will accelerate and become heated when they reach the gap between the control gate 22 and the floating gate 20. Some of the heated electrons will be injected through the gate oxide 26 onto the floating gate 20, due to the attractive electrostatic force from the floating gate 20.

The memory cell is read by placing positive read voltages on the drain 16 and the control gate 22 (which turns on the portion of the channel region under the control gate). If the floating gate 20 is positively charged (i.e. erased of electrons and positively coupled to the drain 16), then the portion of the channel region under the floating gate 20 is turned on as well, and current will flow across the channel region 18, which is sensed as the erased or "1" state. If the floating gate 20 is negatively charged (i.e. programmed with electrons), then the portion of the channel region under the floating gate 20 is mostly or entirely turned off, and current will not flow (or there will be little flow) across the channel region 18, which is sensed as the programmed or "0" state.
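The read operation above amounts to comparing the cell's channel current against a reference. A toy sketch of that sensing decision (the reference threshold and current values are invented for illustration, not taken from the patent):

```python
def sense_cell(cell_current_ua, i_ref_ua=1.0):
    """Sense a memory cell during a read operation.

    An erased cell (positively charged floating gate) conducts and is
    sensed as "1"; a programmed cell (electrons on the floating gate)
    conducts little or no current and is sensed as "0".
    """
    return 1 if cell_current_ua > i_ref_ua else 0

print(sense_cell(25.0))   # strongly conducting channel -> 1 (erased)
print(sense_cell(0.05))   # cut-off channel             -> 0 (programmed)
```

In the digital use of the array each cell resolves to one of these two states; the neural-network use described later instead exploits the continuum of currents in between.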

The architecture of the memory array is shown in Figure 3. The memory cells 10 are arranged in rows and columns. In each column, the memory cells are arranged end-to-end in mirror fashion, so that they are formed as pairs of memory cells, each pair sharing a common source region 14 (S), and each adjacent set of memory cell pairs sharing a common drain region 16 (D). All the source regions 14 for any given row of memory cells are electrically connected together by a source line 14a. All the drain regions 16 for any given column of memory cells are electrically connected together by a bit line 16a. All the control gates 22 for any given row of memory cells are electrically connected together by a control gate line 22a. Therefore, while the memory cells can be individually programmed and read, memory cell erasure is performed row by row (each row of memory cells is erased together by the application of a high voltage on the control gate line 22a). If a particular memory cell is to be erased, all the memory cells in the same row are also erased.

Those skilled in the art understand that the source and drain can be interchangeable, where the floating gate can extend partially over the source instead of the drain, as shown in Figure 4. Figure 5 best illustrates the corresponding memory cell architecture, including the memory cells 10, the source lines 14a, the bit lines 16a, and the control gate lines 22a. As is evident from the figures, memory cells 10 of the same row share the same source line 14a and the same control gate line 22a, while the drain regions of all the cells of the same column are electrically connected to the same bit line 16a. The array design is optimized for digital applications, and permits individual programming of the selected cells, e.g. by applying 1.6 V and 7.6 V to the selected control gate line 22a and source line 14a, respectively, and grounding the selected bit line 16a. Disturbing the non-selected memory cell in the same pair is avoided by applying a voltage greater than 2 volts on the unselected bit lines 16a and grounding the remaining lines. The memory cells 10 cannot be erased individually because the process responsible for erasure (the Fowler-Nordheim tunneling of electrons from the floating gate 20 to the control gate 22) is only weakly affected by the drain voltage (i.e., the only voltage which may be different for two adjacent cells sharing the same source line 14a in the row direction).

Split-gate memory cells having more than two gates are also known. For example, memory cells having a source region 14, a drain region 16, a floating gate 20 over a first portion of the channel region 18, a select gate 28 over a second portion of the channel region 18, a control gate 22 over the floating gate 20, and an erase gate 30 over the source region 14 are known, as shown in Figure 6 (see, e.g., U.S. Patent 6,747,310, which is incorporated herein by reference for all purposes). Here, all the gates are non-floating gates except the floating gate 20, meaning that they are electrically connected or connectable to a voltage source. Programming is performed by heated electrons from the channel region 18 injecting themselves onto the floating gate 20. Erasing is performed by electrons tunneling from the floating gate 20 to the erase gate 30.

The architecture for an array of four-gate memory cells can be configured as shown in Figure 7. In this embodiment, each horizontal select gate line 28a electrically connects together all the select gates 28 for that row of memory cells. Each horizontal control gate line 22a electrically connects together all the control gates 22 for that row of memory cells. Each horizontal source line 14a electrically connects together all the source regions 14 for two rows of memory cells that share the source regions 14. Each bit line 16a electrically connects together all the drain regions 16 for that column of memory cells. Each erase gate line 30a electrically connects together all the erase gates 30 for two rows of memory cells that share the erase gate 30. As with the previous architecture, individual memory cells can be independently programmed and read. However, the cells cannot be erased individually. Erasing is performed by placing a high positive voltage on the erase gate line 30a, which results in the simultaneous erasing of both rows of memory cells that share the same erase gate line 30a. Exemplary operating voltages can include those in Table 1 below (in this embodiment, the select gate lines 28a can be referred to as word lines WL):

In order to utilize the above non-volatile memory arrays in neural networks, two modifications are made. First, the lines are reconfigured so that each memory cell can be individually programmed, erased, and read without adversely affecting the memory state of other memory cells in the array, as further explained below. Second, continuous (analog) programming of the memory cells is provided. Specifically, the memory state (i.e. the charge on the floating gate) of each memory cell in the array can be continuously changed from a fully erased state to a fully programmed state, and vice versa, independently and with minimal disturbance to the other memory cells. This means the cell storage is analog, or at the very least can store one of many discrete values, which allows for very precise and individual tuning of all the cells in the memory array, and which makes the memory array ideal for storing and making fine-tuning adjustments to the synapse weights of the neural network.

Memory cell programming and storage

The neural network weight level assignments as stored in the memory cells can be evenly spaced as shown in Figure 8A, or unevenly spaced as shown in Figure 8B. Programming of the non-volatile memory cells can be implemented using a bidirectional tuning algorithm such as that shown in Figure 9. Icell is the read current of the target cell being programmed, and Itarget is the desired read current when the cell is ideally programmed. The target cell read current Icell is read (step 1) and compared to the target read current Itarget (step 2). If the target cell read current Icell is greater than the target read current Itarget, a programming tuning process is performed (step 3) to increase the number of electrons on the floating gate (in which look-up tables are used to determine the desired programming voltage VCG on the control gate) (steps 3a-3b), which can be repeated as necessary (step 3c). If the target cell read current Icell is less than the target read current Itarget, an erase tuning process is performed (step 4) to decrease the number of electrons on the floating gate (in which look-up tables are used to determine the desired erase voltage VEG on the erase gate) (steps 4a-4b), which can be repeated as necessary (step 4c). If the programming tuning process overshoots the target read current, then the erase tuning process is performed (step 3d, starting with step 4a), and vice versa (step 4d, starting with step 3a), until the target read current is achieved (within an acceptable delta value).
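The bidirectional tuning loop of Figure 9, as described above, can be sketched as follows. Only the control flow follows the text (program to add electrons when Icell is too high, erase to remove them when Icell is too low, switching direction on overshoot until Icell is within delta of Itarget); the toy cell model, pulse step sizes, and all numeric values are invented stand-ins, not the patent's look-up-table voltages:

```python
def tune_cell(read_cell, program_pulse, erase_pulse, i_target, delta,
              max_iters=1000):
    """Bidirectional tuning: alternate program/erase pulses until the
    cell's read current is within +/- delta of the target current."""
    for _ in range(max_iters):
        i_cell = read_cell()
        if abs(i_cell - i_target) <= delta:
            return i_cell            # tuned to within the acceptable delta
        if i_cell > i_target:
            program_pulse()          # add electrons -> lower read current
        else:
            erase_pulse()            # remove electrons -> raise read current
    raise RuntimeError("cell failed to converge")

class ToyCell:
    """Toy cell model: read current falls as electrons accumulate."""
    def __init__(self):
        self.electrons = 0
    def read(self):
        return 100.0 - 0.5 * self.electrons
    def program(self):               # coarse program pulse: +3 electrons
        self.electrons += 3
    def erase(self):                 # finer erase pulse: -1 electron
        self.electrons -= 1

cell = ToyCell()
print(tune_cell(cell.read, cell.program, cell.erase,
                i_target=60.0, delta=1.0))  # 61.0
```

With a tight delta the loop naturally alternates: the coarse program pulses overshoot, and the finer erase pulses walk the current back up, mirroring steps 3d/4d of the flow chart.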

Alternatively, programming of the non-volatile memory cells can be implemented using a unidirectional tuning algorithm that uses programming tuning. With this algorithm, the memory cell is initially fully erased, and then the programming tuning steps 3a-3c of Figure 9 are performed until the read current of the target cell reaches the target threshold. As a further alternative, tuning of the non-volatile memory cells can be implemented using a unidirectional tuning algorithm that uses erase tuning. With this approach, the memory cell is initially fully programmed, and then the erase tuning steps 4a-4c of Figure 9 are performed until the read current of the target cell reaches the target threshold.

Figure 10 is a diagram illustrating weight mapping using current comparison. The weight digital bits (e.g., a 5-bit weight for each synapse, representing the target digital weight for the memory cell) are input to a digital-to-analog converter (DAC) 40, which converts the bits to a voltage Vout (e.g., 64 voltage levels - 5 bits). Vout is converted to a current Iout (e.g., 64 current levels - 5 bits) by a voltage-to-current converter V/I Conv 42. The current is supplied to a current comparator IComp 44. A program or erase algorithm enable is input to the memory cell 10 (e.g., erase: incrementing the EG voltage; or program: incrementing the CG voltage). The memory cell output current Icellout (i.e., from a read operation) is supplied to the current comparator IComp 44, which compares the memory cell current Icellout with the current Iout derived from the weight digital bits, to produce a signal indicative of the weight stored in the memory cell 10.
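A sketch of this current-comparison mapping, under the assumption that the cell starts fully erased (maximum read current) and that each programming pulse monotonically reduces Icellout. The DAC 40 and V/I converter 42 are modeled together as one ideal conversion, and `read_cell`/`program_pulse` are hypothetical hardware stand-ins:

```python
def program_to_weight(bits, read_cell, program_pulse, n_bits=5, i_max=1.0):
    """Program a cell until its read current matches the DAC-derived target.

    bits: digital weight in 0 .. 2**n_bits - 1.
    The comparator IComp 44 is modeled by the `>` test in the loop.
    """
    i_out = i_max * bits / (2 ** n_bits - 1)  # DAC + V/I conversion to Iout
    while read_cell() > i_out:                # IComp: Icellout vs Iout
        program_pulse()                       # increment CG programming voltage
    return read_cell()

# Toy cell (illustrative): starts fully erased at i_max, each pulse removes 0.01.
cell = {"i": 1.0}

def read_cell():
    return cell["i"]

def program_pulse():
    cell["i"] -= 0.01

stored = program_to_weight(16, read_cell, program_pulse)
```

The voltage-comparison scheme of Figure 11 is the same loop with the comparison done on converted voltages rather than currents.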

Figure 11 is a diagram illustrating weight mapping using voltage comparison. The weight digital bits (e.g., a 5-bit weight for each synapse) are input to a digital-to-analog converter (DAC) 40, which converts the bits to a voltage Vout (e.g., 64 voltage levels - 5 bits). Vout is supplied to a voltage comparator VComp 46. A program or erase algorithm enable is input to the memory cell 10 (e.g., erase: incrementing the EG voltage; or program: incrementing the CG voltage). The memory cell output current Icellout is supplied to a current-to-voltage converter I/V Conv 48 for conversion to a voltage V2out (e.g., 64 voltage levels - 5 bits). The voltage V2out is supplied to the voltage comparator VComp 46, which compares the voltages Vout and V2out to produce a signal indicative of the weight stored in the memory cell 10.

Neural Networks Employing Non-Volatile Memory Cell Arrays

Figure 12 conceptually illustrates a non-limiting example of a neural network utilizing a non-volatile memory array. This example uses the non-volatile memory array neural network for a facial recognition application, but any other appropriate application could be implemented using a non-volatile memory array based neural network. S0 is the input, which for this example is a 32x32 pixel RGB image with 5-bit precision (i.e., three 32x32 pixel arrays, one for each color R, G and B, each pixel being 5-bit precision). The synapses CB1 going from S0 to C1 have both different sets of weights and shared weights, and scan the input image with 3x3 pixel overlapping filters (kernels), shifting the filter by 1 pixel (or more than 1 pixel as dictated by the model). Specifically, values for 9 pixels in a 3x3 portion of the image (i.e., referred to as a filter or kernel) are provided to the synapses CB1, whereby these 9 input values are multiplied by the appropriate weights and, after summing the outputs of that multiplication, a single output value is determined and provided by a first neuron of CB1 for generating a pixel of one of the layers of feature map C1. The 3x3 filter is then shifted one pixel to the right (i.e., adding the column of three pixels on the right and dropping the column of three pixels on the left), whereby the 9 pixel values in this newly positioned filter are provided to the synapses CB1, where they are multiplied by the same weights and a second single output value is determined by the associated neuron. This process is continued until the 3x3 filter scans the entire 32x32 pixel image, for all three colors and for all bits (precision values). The process is then repeated using different sets of weights to generate a different feature map of C1, until all the feature map layers of C1 have been calculated.
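Functionally, the CB1 filter scan described above is an ordinary 2-D convolution. A minimal sketch for one color channel, ignoring the analog implementation:

```python
def scan_filter(image, kernel, shift=1):
    """Slide a KxK kernel across an NxN image; each placement multiplies
    K*K pixels by the shared weights and sums them into one feature-map
    pixel, as the CB1 synapses and neurons do."""
    k = len(kernel)
    n = len(image)
    out = (n - k) // shift + 1
    fmap = [[0.0] * out for _ in range(out)]
    for r in range(out):
        for c in range(out):
            fmap[r][c] = sum(image[r * shift + i][c * shift + j] * kernel[i][j]
                             for i in range(k) for j in range(k))
    return fmap

# A 3x3 kernel shifted by 1 over a 32x32 channel yields a 30x30 map, as at C1.
image = [[1.0] * 32 for _ in range(32)]
kernel = [[1.0] * 3 for _ in range(3)]
fmap = scan_filter(image, kernel)
```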

In the present example, at C1, there are 16 feature maps, each with 30x30 pixels. Each pixel is a new feature pixel extracted from multiplying the inputs and the kernel, and therefore each feature map is a two-dimensional array, meaning in this example that the synapses CB1 constitute 16 layers of two-dimensional arrays (keeping in mind that the neuron layers and arrays referenced herein are logical relationships, not necessarily physical relationships - i.e., the arrays are not necessarily oriented in physical two-dimensional arrays). Each of the 16 feature maps is generated by one of sixteen different sets of synapse weights applied to the filter scans. The C1 feature maps could all be directed to different aspects of the same image feature, such as boundary identification. For example, the first map (generated using a first set of weights, shared for all scans used to generate this first map) could identify rounded edges, the second map (generated using a second set of weights different from the first set) could identify rectangular edges, or the aspect ratio of certain features, and so on.

An activation function P1 (pooling) is applied before going from C1 to S1, which pools values from consecutive, non-overlapping 2x2 regions in each feature map. The purpose of the pooling stage is to average out the nearby locations (or a max function can also be used), to reduce the dependence on edge location, for example, and to reduce the data size before going to the next stage. At S1, there are 16 15x15 feature maps (i.e., sixteen different arrays of 15x15 pixels each). The synapses and associated neurons in CB2 going from S1 to C2 scan the maps in S1 with 4x4 filters, with a filter shift of 1 pixel. At C2, there are 22 12x12 feature maps. An activation function P2 (pooling) is applied before going from C2 to S2, which pools values from consecutive non-overlapping 2x2 regions in each feature map. At S2, there are 22 6x6 feature maps. An activation function is applied at the synapses CB3 going from S2 to C3, where every neuron in C3 connects to every map in S2. At C3, there are 64 neurons. The synapses CB4 going from C3 to the output S3 fully connect S3 to C3. The output at S3 includes 10 neurons, where the highest output neuron determines the class. This output could, for example, be indicative of an identification or classification of the contents of the original image.
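The layer sizes quoted above follow from two small formulas; a sketch checking them (the map counts of 16 and 22 come from the example's chosen number of weight sets, not from these formulas):

```python
def conv_out(n, k, shift=1):
    """Output width of scanning an n-wide map with a k-wide filter."""
    return (n - k) // shift + 1

def pool_out(n, p=2):
    """Output width after pooling non-overlapping p x p regions."""
    return n // p

c1 = conv_out(32, 3)   # C1 feature maps: 30 x 30
s1 = pool_out(c1)      # S1 feature maps: 15 x 15
c2 = conv_out(s1, 4)   # C2 feature maps: 12 x 12
s2 = pool_out(c2)      # S2 feature maps: 6 x 6
```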

Each level of the synapses is implemented using an array, or a portion of an array, of non-volatile memory cells. Figure 13 is a block diagram of a vector-by-matrix multiplication (VMM) array, which includes the non-volatile memory cells and is utilized as the synapses between an input layer and the next layer. Specifically, the VMM 32 includes an array of non-volatile memory cells 33, erase gate and word line gate decoder 34, control gate decoder 35, bit line decoder 36 and source line decoder 37, which decode the inputs for the memory array 33. The source line decoder 37 in this example also decodes the output of the memory cell array. The memory array serves two purposes. First, it stores the weights that will be used by the VMM. Second, the memory array effectively multiplies the inputs by the weights stored in the memory array to produce the output, which will be the input to the next layer or the input to the final layer. By performing the multiplication function, the memory array negates the need for separate multiplication logic circuits and is also power efficient.

The output of the memory array is supplied to a differential summing op-amp 38, which sums up the outputs of the memory cell array to create a single value for that convolution. The summed-up output values are then supplied to the activation function circuit 39, which rectifies the output. The rectified output values become an element of a feature map of the next layer (C1 in the description above, for example), and are then applied to the next synapse to produce the next feature map layer or final layer. Therefore, in this example, the memory array constitutes a plurality of synapses (which receive their inputs from the prior layer of neurons or from an input layer such as an image database), and the summing op-amp 38 and activation function circuit 39 constitute a plurality of neurons.

Figure 14 is a block diagram of the various levels of VMM. As shown in Figure 14, the input is converted from digital to analog by a digital-to-analog converter 31, and provided to the input VMM 32a. The output generated by the input VMM 32a is provided as an input to the next VMM (hidden level 1) 32b, which in turn generates an output that is provided as an input to the next VMM (hidden level 2) 32c, and so on. The various layers of VMMs 32 function as different layers of synapses and neurons of a convolutional neural network (CNN). Each VMM can be a stand-alone non-volatile memory array, or multiple VMMs can utilize different portions of the same non-volatile memory array, or multiple VMMs can utilize overlapping portions of the same non-volatile memory array.

Figure 15 illustrates an array of four-gate memory cells (i.e., such as that shown in Figure 6) arranged as a drain summing matrix multiplier. The various gate lines and region lines for the array of Figure 15 are the same as those in Figure 7 (with the same element numbers used for corresponding structure), except that the erase gate lines 30a run vertically instead of horizontally (i.e., each erase gate line 30a connects together all the erase gates 30 for that column of memory cells), so that each memory cell 10 can be independently programmed, erased and read. After each of the memory cells is programmed with the appropriate weight value for that cell, the array acts as a drain summing matrix multiplier. The matrix inputs are Vin0...Vin7, and are placed on the select gate lines 28a. The matrix outputs Iout0...IoutN for the array of Figure 15 are produced on the bit lines 16a. Each output Iout is the sum, for all the cells in the column, of the cell current I times the weight W stored in the cell:

Iout = Σ(Iij * Wij)
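The column sum Iout = Σ(Iij * Wij) is a per-bit-line multiply-accumulate; a sketch with illustrative numbers:

```python
def drain_summing(cell_currents, weights):
    """Per-bit-line output: each column j sums cell current times stored
    weight over all rows i, Iout_j = sum_i I[i] * W[i][j]."""
    n_cols = len(weights[0])
    return [sum(i_cell * w_row[j] for i_cell, w_row in zip(cell_currents, weights))
            for j in range(n_cols)]

# Two rows, two bit lines; currents and weights are illustrative.
iouts = drain_summing([1.0, 2.0], [[0.5, 1.0],
                                   [1.0, 0.25]])
```

This is exactly the vector-by-matrix product the VMM performs in the analog domain, with the bit line doing the summation for free.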

Each memory cell (or pair of memory cells) acts as a single synapse having a weight value expressed as the output current Iout dictated by the sum of the weight values stored in the memory cells (or pair of memory cells) in that column. The output of any given synapse is in the form of current. Therefore, each subsequent VMM stage after the first stage preferably includes circuitry for converting the incoming currents from the previous VMM stage into voltages to be used as the input voltages Vin. Figure 16 illustrates an example of such current-to-voltage conversion circuitry, which is a modified row of memory cells that log-converts the incoming currents Iin0...IinN into the input voltages Vin0...VinN.

The memory cells described herein are biased in weak inversion:

Ids = Io * e^((Vg-Vth)/kVt) = w * Io * e^(Vg/kVt), where w = e^(-Vth/kVt)

For an I-to-V log converter using a memory cell to convert input current into an input voltage:

Vg = k*Vt*log[Ids/(wp*Io)]

For a memory array used as a vector-matrix multiplier VMM, the output current is:

Iout = wa * Io * e^(Vg/kVt), namely Iout = (wa/wp) * Iin = W * Iin
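These relations can be checked numerically: feeding Iin through the log-converter cell (weight wp) and applying the resulting Vg to an array cell (weight wa) reproduces Iout = (wa/wp) * Iin. The constants Io and kVt below are illustrative only:

```python
import math

K_VT = 0.039   # kappa * thermal voltage (illustrative)
I0 = 1e-9      # subthreshold scale current (illustrative)

def log_convert(i_in, wp):
    """I-to-V converter cell: Vg = k*Vt*ln(Iin / (wp*Io))."""
    return K_VT * math.log(i_in / (wp * I0))

def cell_current(vg, wa):
    """Array cell in weak inversion: Ids = wa * Io * e^(Vg/kVt)."""
    return wa * I0 * math.exp(vg / K_VT)

i_in, wp, wa = 2e-7, 0.5, 0.8
i_out = cell_current(log_convert(i_in, wp), wa)   # equals (wa/wp) * Iin
```

Note that Io and kVt cancel between the converter and the array cell, which is why the scheme is tolerant of their absolute values as long as the two cells match.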

Figures 17 and 18 illustrate another configuration of an array of four-gate memory cells (i.e., such as that shown in Figure 6) arranged as a drain summing matrix multiplier. The lines for the arrays of Figures 17 and 18 are the same as those of Figures 15 and 16, except that the source lines 14a run vertically instead of horizontally (i.e., each source line 14a connects together all the source regions 14 for that column of memory cells), and the erase gate lines 30a run horizontally instead of vertically (i.e., each erase gate line 30a connects together all the erase gates 30 for that row of paired memory cells), so that each memory cell can be independently programmed, erased and read. The matrix inputs Vin0...VinN remain on the select gate lines 28a, and the matrix outputs Iout0...IoutN remain on the bit lines 16a.

Figure 19 illustrates another configuration of an array of four-gate memory cells (i.e., such as that shown in Figure 6) arranged as a gate coupling/source summing matrix multiplier. The lines for the array of Figure 19 are the same as those of Figures 15 and 16, except that the select gate lines 28a run vertically, with two of them for each column of memory cells. Specifically, each column of memory cells includes two select gate lines: a first select gate line 28a1 connecting together all the select gates 28 of the odd row memory cells, and a second select gate line 28a2 connecting together all the select gates 28 of the even row memory cells.

The circuits at the top and bottom of Figure 19 serve to log-convert the input currents Iin0...IinN into the input voltages Vin0...VinN. The matrix inputs shown in this figure are Vin0...Vin5, and are placed on the select gate lines 28a1 and 28a2. Specifically, input Vin0 is placed on the select gate line 28a1 for the odd cells in column 1. Vin1 is placed on the select gate line 28a2 for the even cells in column 1. Vin2 is placed on the select gate line 28a1 for the odd cells in column 2. Vin3 is placed on the select gate line 28a2 for the even cells in column 2, and so on. The matrix outputs Iout0...Iout3 are provided on the source lines 14a. The bit lines 16a are biased at a fixed bias voltage VBLrd. Each output Iout is the sum, for all the cells in that row of memory cells, of the cell current I times the weight W stored in the cell. Therefore, for this architecture, each row of memory cells acts as a single synapse having a weight value expressed as the output current Iout dictated by the sum of the weight values stored in the memory cells in that row.

Figure 20 illustrates another configuration of an array of four-gate memory cells (i.e., such as that shown in Figure 6) arranged as a gate coupling/source summing matrix multiplier. The lines for the array of Figure 20 are the same as those of Figure 19, except that the bit lines 16 run vertically, with two of them for each column of memory cells. Specifically, each column of memory cells includes two bit lines: a first bit line 16a1 connecting together all the drain regions of the adjacent paired memory cells (two memory cells sharing the same bit line contact), and a second bit line 16a2 connecting together all the drain regions of the next adjacent paired memory cells. The matrix inputs Vin0...VinN remain on the select gate lines 28a1 and 28a2, and the matrix outputs Iout0...IoutN remain on the source lines 14a. The set of all the first bit lines 16a1 is biased at one bias level, e.g., 1.2V, and the set of all the second bit lines 16a2 is biased at another bias level, e.g., 0V. The source lines 14a are biased at a virtual bias level, e.g., 0.6V. For each pair of memory cells sharing a common source line 14a, the output current will be a differential output of the top cell minus the bottom cell. Therefore, each output Iout is the sum of these differential outputs:

Iout = Σ(Iiju*Wiju - Iijd*Wijd)

SL voltage ~ ½ Vdd, ~0.5V. Therefore, for this architecture, each row of paired memory cells acts as a single synapse having a weight value expressed as the output current Iout, which is the sum of the differential outputs dictated by the weight values stored in the memory cells in that row of paired memory cells.
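A sketch of the differential sum Iout = Σ(Iiju*Wiju - Iijd*Wijd), showing how pairing a top and bottom cell lets one synapse realize a positive or negative effective weight even though each stored weight is non-negative:

```python
def differential_output(inputs, w_top, w_bottom):
    """Shared-source-line pair output: each pair contributes
    I_j * (W_top_j - W_bottom_j), so the effective weight is
    negative whenever the bottom cell stores the larger weight."""
    return sum(i * (wt - wb) for i, wt, wb in zip(inputs, w_top, w_bottom))

# Illustrative values: the second pair's bottom cell dominates,
# pulling the summed output negative.
iout = differential_output([1.0, 2.0], [0.6, 0.1], [0.2, 0.5])
```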

Figure 21 illustrates another configuration of an array of four-gate memory cells (i.e., such as that shown in Figure 6) arranged as a gate coupling/source summing matrix multiplier. The lines for the array of Figure 21 are the same as those of Figure 20, except that the erase gate lines 30a run horizontally, and the control gate lines 22a run vertically, with two of them for each column of memory cells. Specifically, each column of memory cells includes two control gate lines: a first control gate line 22a1 connecting together all the control gates 22a of the odd row memory cells, and a second control gate line 22a2 connecting together all the control gates 22a of the even row memory cells. The matrix inputs Vin0...VinN remain on the select gate lines 28a1 and 28a2, and the matrix outputs Iout0...IoutN remain on the source lines 14a.

Figure 22 illustrates another configuration of an array of four-gate memory cells (i.e., such as that shown in Figure 6) arranged as a source summing matrix multiplier. The lines and inputs for the array of Figure 22 are the same as those of Figure 17. However, instead of the outputs being provided on the bit lines 16a, they are provided on the source lines 14a. The matrix inputs Vin0...VinN remain on the select gate lines 28a.

Figure 23 illustrates a configuration of an array of two-gate memory cells (i.e., such as that shown in Figure 1) arranged as a drain summing matrix multiplier. The lines for the array of Figure 23 are the same as those of Figure 5, except that the horizontal source lines 14a have been replaced with vertical source lines 14a. Specifically, each source line 14a is connected to all the source regions in that column of memory cells. The matrix inputs Vin0...VinN are placed on the control gate lines 22a. The matrix outputs Iout0...IoutN are produced on the bit lines 16a. Each output Iout is the sum, for all the cells in the column, of the cell current I times the weight W stored in the cell. Each column of memory cells acts as a single synapse having a weight value expressed as the output current Iout dictated by the sum of the weight values stored in the memory cells for that column.

Figure 24 illustrates a configuration of an array of two-gate memory cells (i.e., such as that shown in Figure 1) arranged as a source summing matrix multiplier. The lines for the array of Figure 24 are the same as those of Figure 5, except that the control gate lines 22a run vertically, with two of them for each column of memory cells. Specifically, each column of memory cells includes two control gate lines: a first control gate line 22a1 connecting together all the control gates 22a of the odd row memory cells, and a second control gate line 22a2 connecting together all the control gates 22a of the even row memory cells.

The matrix inputs for this configuration are Vin0...VinN, and are placed on the control gate lines 22a1 and 22a2. Specifically, input Vin0 is placed on the control gate line 22a1 for the odd row cells in column 1. Vin1 is placed on the control gate line 22a2 for the even row cells in column 1. Vin2 is placed on the control gate line 22a1 for the odd row cells in column 2. Vin3 is placed on the control gate line 22a2 for the even row cells in column 2, and so on. The matrix outputs Iout0...IoutN are produced on the source lines 14a. For each pair of memory cells sharing a common source line 14a, the output current will be a differential output of the top cell minus the bottom cell. Therefore, for this architecture, each row of paired memory cells acts as a single synapse having a weight value expressed as the output current Iout, which is the sum of the differential outputs dictated by the weight values stored in the memory cells in that row of paired memory cells.

Exemplary operating voltages for the embodiments of Figures 15-16, 19 and 20 include:

Approximate numerical values include:

Exemplary operating voltages for the embodiments of Figures 17-18 and 22 include:

Approximate numerical values include:

Figure 25 illustrates an exemplary current-to-voltage log converter 50 for use with the present invention (WL = select gate line, CG = control gate line, EG = erase gate line). The memory is biased in the weak inversion region, Ids = Io * e^((Vg-Vth)/kVt). Figure 26 illustrates an exemplary voltage-to-current log converter 52 for use with the present invention. The memory is biased in the weak inversion region. Figure 27 illustrates a Gnd-referred current summer 54 for use with the present invention. Figure 28 illustrates a Vdd-referred current summer 56 for use with the present invention. Examples of the load include a diode, a non-volatile memory cell, and a resistor.

The memory array configurations described above implement a feed-forward classification engine. The training is completed by storing the "weight" values in the memory cells (creating a synapse array), which means that the subthreshold-slope factors of the individual cells have been modified. The neurons are implemented by summing the outputs of the synapses and firing or not firing depending on the neuron threshold (i.e., making a decision).

The following steps can be used to process input current IE (e.g., input currents coming directly from the output of feature calculations, for image recognition):

Step 1 - Convert to log scale for easier processing with non-volatile memory.

‧ Input current to voltage conversion using a bipolar transistor. The bias voltage VBE of a bipolar transistor has a logarithmic relationship with the emitter current.

‧ VBE = a*ln(IE) - b → VBE ∝ ln(IE)

- where a (ratio) and b (bias or offset) are constants

‧ The VBE voltage is generated such that the memory cells will operate in the subthreshold region.

Step 2 - Apply the generated bias voltage VBE to the word lines (in the subthreshold region).

‧ The output current Idrain of a CMOS transistor has an exponential relationship with the input voltage (VGS), thermal voltage (UT) and kappa (k = Cox/(Cox+Cdep)), where Cox and Cdep are linearly dependent on the charge on the floating gate.

‧ Idrain ∝ Exp(k*VBE/UT), or

‧ ln(Idrain) ∝ k*VBE/UT

‧ Logarithmic Idrain has a linear relationship with the multiple of VBE and with the charge on the floating gate (related through kappa), where UT is constant at a given temperature.

‧ For a synapse, there is an output = input * weight relationship.

The outputs (Idrain) of each of the cells can be tied together in read mode to sum up the values of the synapses in the array or in a sector of the array. Once Idrain has been summed up, it can be fed into a current comparator, which outputs a "logic" 0 or 1 depending on the comparison, for a single-perception neural network. One perception (one sector) is described above. The output from each perception can be fed to the next set of sectors for multiple perceptions.
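A sketch of one perception (sector): the tied drain currents are summed and compared against a neuron threshold current to emit logic 0 or 1:

```python
def perception_output(drain_currents, i_threshold):
    """Sum the tied cell outputs and fire (1) if the total exceeds
    the neuron threshold current, else do not fire (0)."""
    return 1 if sum(drain_currents) > i_threshold else 0

# Illustrative currents: the first sector fires, the second does not.
fired = perception_output([0.2, 0.3, 0.1], i_threshold=0.5)
quiet = perception_output([0.1, 0.1], i_threshold=0.5)
```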

In a memory-based convolutional neural network, a set of inputs needs to be multiplied with certain weights to get a desired result for a hidden layer or output layer. As explained above, one technique is to scan the preceding image (e.g., an NxN matrix) using an MxM filter (kernel) that is shifted by X pixels across the image in both horizontal and vertical directions. The scanning of the pixels can be done at least partially concurrently, so long as there are enough inputs to the memory array. For example, as shown in Figure 29, a filter size of M=6 (i.e., a 6x6 array of 36 pixels) can be used to scan an NxN image array using shifts of X=2. In that example, the first row of six pixels in the filter is provided to the first 6 of the N² inputs to the memory array. Then, the second row of six pixels in the filter is provided to the first 6 inputs of the second N inputs of the N² inputs, and so on. This is represented in the first row of the diagram in Figure 29, where the dots represent the weights stored in the memory array for multiplication with the inputs as set forth above. Then, the filter is shifted to the right by two pixels, and the first row of six pixels in the shifted filter is provided to the third through eighth inputs of the first N inputs, the second row of six pixels to the third through eighth inputs of the second N inputs, and so on. Once the filter has been shifted all the way to the right side of the image, the filter is repositioned back to the left side, but shifted down by two pixels, where the process repeats again, until the entire NxN image is scanned. Each set of horizontally shifted scans can be represented by a trapezoidal shape showing which of the N² memory array inputs are provided with data for multiplication.

Thus, scanning an N×N image array with a 6×6 filter and a shift of two pixels between scans requires N² inputs and ((N−4)/2)² rows. FIG. 30 graphically shows the trapezoidal shapes indicating how the weights in the memory array are stored for filter scanning. Each row of shaded areas represents the weights applied to the inputs during one set of horizontal scans. The arrows indicate linear input lines of the memory array (e.g., the input lines 28a in FIG. 15, which receive the input data and extend all the way across the memory array in a linear fashion, each always accessing the same row of memory cells; in the case of the array of FIG. 19, each of the input lines always accesses the same column of memory cells). The white areas indicate where no data is being supplied to the inputs; the white areas therefore indicate inefficient use of the memory cell array.
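The counts stated above follow from the standard formula for the number of filter placements along one axis, (N − M)/X + 1, which equals (N − 4)/2 for M=6 and X=2. A quick numerical check (this arithmetic sketch is mine, not the patent's):

```python
# Required array dimensions for scanning an N x N image with an M x M filter
# and stride X: N*N inputs, and one row of weights per filter placement.

def scan_dimensions(n, m=6, x=2):
    positions_per_axis = (n - m) // x + 1  # filter placements along one axis
    inputs = n * n                         # one input line per image pixel
    rows = positions_per_axis ** 2         # one weight row per placement
    return inputs, rows

inputs, rows = scan_dimensions(12)
print(inputs, rows)   # 144 16, i.e. N^2 = 144 inputs and ((12-4)/2)^2 = 16 rows
```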

Efficiency can be increased, and the total number of inputs reduced, by reconfiguring the memory array as shown in FIG. 31. Specifically, the input lines of the memory array are periodically shifted to another row or column, thereby reducing the unused portions of the array and hence the number of redundant input lines over the array needed to perform the scan. Specifically, for the present example in which the shift is X=2, the arrows indicate that each input line is periodically shifted over by two rows or two columns, converting the widely spaced trapezoidal shapes of memory cell usage into closely spaced rectangular shapes of memory cell usage. While extra space between the memory cell portions is needed for the wiring to implement this shift, the number of inputs needed in the memory cell array is greatly reduced (only 5n+6).
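The magnitude of this reduction can be illustrated numerically. The 5n+6 figure is taken directly from the text; interpreting n as the image dimension N is my assumption, so treat this only as a rough sketch of the scaling:

```python
# Comparing input-line counts before and after the periodic input-line shift
# of FIG. 31 (assumption: n in the text's "5n+6" is the image dimension N).

def input_counts(n):
    before = n * n       # one dedicated input line per pixel position
    after = 5 * n + 6    # after periodically shifting the input lines
    return before, after

before, after = input_counts(100)
print(before, after)             # 10000 506
print(round(before / after, 1))  # 19.8, i.e. roughly a 20x reduction
```

The key point is that the unshifted layout grows quadratically in N while the shifted layout grows linearly, so the savings increase with image size.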

FIG. 32 illustrates the array of FIG. 15, but with a periodic two-row shift of the lines 28a used as the input lines. A periodic row shift of the input lines can be similarly implemented in the arrays of FIGS. 17, 22 and 23. FIG. 33 illustrates the array of FIG. 20, but with a periodic two-column shift of the lines 28a1 and 28a2 used as the input lines. A periodic column shift of the input lines can be similarly implemented in the arrays of FIGS. 19, 21 and 24.

It is to be understood that the present invention is not limited to the embodiment(s) described above and illustrated herein, but encompasses any and all variations falling within the scope of any claims. For example, references to the present invention herein are not intended to limit the scope of any claim or claim term, but instead merely make reference to one or more features that may be covered by one or more of the claims. The materials, processes and numerical examples described above are exemplary only, and should not be deemed to limit the claims. A single layer of material could be formed as multiple layers of such or similar materials, and vice versa. Finally, while the outputs of each memory cell array are manipulated by filter condensation before being sent to the next neuron layer, they need not be.

It should be noted that, as used herein, the terms "over" and "on" both inclusively include "directly on" (no intermediate materials, elements or spaces disposed therebetween) and "indirectly on" (intermediate materials, elements or spaces disposed therebetween). Likewise, the term "adjacent" includes "directly adjacent" (no intermediate materials, elements or spaces disposed therebetween) and "indirectly adjacent" (intermediate materials, elements or spaces disposed therebetween), "mounted to" includes "directly mounted to" (no intermediate materials, elements or spaces disposed therebetween) and "indirectly mounted to" (intermediate materials, elements or spaces disposed therebetween), and "electrically coupled" includes "directly electrically coupled to" (no intermediate materials or elements therebetween that electrically connect the elements together) and "indirectly electrically coupled to" (intermediate materials or elements therebetween that electrically connect the elements together). For example, forming an element "over a substrate" can include forming the element directly on the substrate with no intermediate materials/elements therebetween, as well as forming the element indirectly on the substrate with one or more intermediate materials/elements therebetween.

31‧‧‧Digital-to-analog converter

32a~e‧‧‧Input VMM

Claims (31)

1. A neural network device, comprising:
a first plurality of synapses configured to receive a first plurality of inputs and to generate a first plurality of outputs therefrom, wherein the first plurality of synapses comprises:
a plurality of memory cells, wherein each of the memory cells includes spaced-apart source and drain regions formed in a semiconductor substrate, with a channel region extending between the source region and the drain region, a floating gate disposed over and insulated from a first portion of the channel region, and a non-floating gate disposed over and insulated from a second portion of the channel region;
wherein each of the plurality of memory cells is configured to store a weight value corresponding to a number of electrons on the floating gate; and
wherein the plurality of memory cells is configured to multiply the first plurality of inputs by the stored weight values to generate the first plurality of outputs; and
a first plurality of neurons configured to receive the first plurality of outputs.

2. The neural network device of claim 1, wherein the first plurality of neurons is configured to generate a first plurality of decisions based on the first plurality of outputs.
3. The neural network device of claim 2, further comprising:
a second plurality of synapses configured to receive a second plurality of inputs based on the first plurality of decisions, and to generate a second plurality of outputs therefrom, wherein the second plurality of synapses comprises:
a plurality of second memory cells, wherein each of the second memory cells includes spaced-apart second source and second drain regions formed in the semiconductor substrate, with a second channel region extending between the second source region and the second drain region, a second floating gate disposed over and insulated from a first portion of the second channel region, and a second non-floating gate disposed over and insulated from a second portion of the second channel region;
wherein each of the plurality of second memory cells is configured to store a second weight value corresponding to a number of electrons on the second floating gate; and
wherein the plurality of second memory cells is configured to multiply the second plurality of inputs by the stored second weight values to generate the second plurality of outputs; and
a second plurality of neurons configured to receive the second plurality of outputs.

4. The neural network device of claim 3, wherein the second plurality of neurons is configured to generate a second plurality of decisions based on the second plurality of outputs.
5. The neural network device of claim 1, wherein each of the memory cells of the first plurality of synapses further comprises:
a second non-floating gate disposed over and insulated from the source region; and
a third non-floating gate disposed over and insulated from the floating gate.

6. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates in one of the rows of the memory cells;
a plurality of second lines each electrically connecting together the second non-floating gates in one of the columns of the memory cells;
a plurality of third lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fourth lines each electrically connecting together the source regions in one of the rows of the memory cells; and
a plurality of fifth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive the first plurality of inputs on the plurality of first lines, and to provide the first plurality of outputs on the plurality of fifth lines.

7. The neural network device of claim 6, wherein for each of the plurality of fifth lines, the one of the first plurality of outputs provided thereon is, for all of the memory cells in one column of the memory cells, a sum of the currents through the memory cells multiplied by the respective weight values stored in the memory cells.

8. The neural network device of claim 6, further comprising:
circuitry for log converting the currents of the first plurality of inputs into voltages before the receiving of the first plurality of inputs on the plurality of first lines.

9. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates in one of the rows of the memory cells;
a plurality of second lines each electrically connecting together the second non-floating gates in one of the rows of the memory cells;
a plurality of third lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fourth lines each electrically connecting together the source regions in one of the columns of the memory cells; and
a plurality of fifth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive the first plurality of inputs on the plurality of first lines, and to provide the first plurality of outputs on the plurality of fifth lines.

10. The neural network device of claim 9, wherein for each of the plurality of fifth lines, the one of the second plurality of outputs provided thereon is, for all of the memory cells in one column of the memory cells, a sum of the currents through the memory cells multiplied by the respective weight values stored in the memory cells.

11. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates of the odd-row memory cells in one of the columns of the memory cells;
a plurality of second lines each electrically connecting together the first non-floating gates of the even-row memory cells in one of the columns of the memory cells;
a plurality of third lines each electrically connecting together the second non-floating gates in one of the columns of the memory cells;
a plurality of fourth lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fifth lines each electrically connecting together the source regions in one of the rows of the memory cells; and
a plurality of sixth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive some of the first plurality of inputs on the plurality of first lines and others of the first plurality of inputs on the plurality of second lines, and to provide the first plurality of outputs on the plurality of fifth lines.

12. The neural network device of claim 11, wherein for each of the plurality of fifth lines, the one of the first plurality of outputs provided thereon is, for all of the memory cells in one row of the memory cells, a sum of the currents through the memory cells multiplied by the respective weight values stored in the memory cells.

13. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates of the odd-row memory cells in one of the columns of the memory cells;
a plurality of second lines each electrically connecting together the first non-floating gates of the even-row memory cells in one of the columns of the memory cells;
a plurality of third lines each electrically connecting together the second non-floating gates in one of the columns of the memory cells;
a plurality of fourth lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fifth lines each electrically connecting together the source regions in one of the rows of the memory cells;
a plurality of sixth lines each electrically connecting together the odd drain regions in one of the columns of the memory cells; and
a plurality of seventh lines each electrically connecting together the even drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive some of the first plurality of inputs on the plurality of first lines and others of the first plurality of inputs on the plurality of second lines, and to provide the first plurality of outputs on the plurality of fifth lines.

14. The neural network device of claim 13, wherein for each of the plurality of fifth lines, the one of the first plurality of outputs provided thereon is, for all of the memory cell pairs in one row of the memory cells, a sum of the differential outputs from the memory cell pairs, and wherein each of the differential outputs is the current through the memory cell pair multiplied by a difference between the respective weight values stored in the memory cell pair.

15. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates of the odd-row memory cells in one of the columns of the memory cells;
a plurality of second lines each electrically connecting together the first non-floating gates of the even-row memory cells in one of the columns of the memory cells;
a plurality of third lines each electrically connecting together the second non-floating gates in one of the rows of the memory cells;
a plurality of fourth lines each electrically connecting together the third non-floating gates of the odd-row memory cells in one of the columns of the memory cells;
a plurality of fifth lines each electrically connecting together the third non-floating gates of the even-row memory cells in one of the columns of the memory cells;
a plurality of sixth lines each electrically connecting together the source regions in one of the rows of the memory cells;
a plurality of seventh lines each electrically connecting together the odd drain regions in one of the columns of the memory cells; and
a plurality of eighth lines each electrically connecting together the even drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive some of the first plurality of inputs on the plurality of first lines and others of the first plurality of inputs on the plurality of second lines, and to provide the first plurality of outputs on the plurality of sixth lines.

16. The neural network device of claim 15, wherein for each of the plurality of sixth lines, the one of the first plurality of outputs provided thereon is, for all of the memory cell pairs in one row of the memory cells, a sum of the differential outputs from the memory cell pairs, and wherein each of the differential outputs is the current through the memory cell pair multiplied by a difference between the respective weight values stored in the memory cell pair.

17. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates in one of the rows of the memory cells;
a plurality of second lines each electrically connecting together the second non-floating gates in one of the rows of the memory cells;
a plurality of third lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fourth lines each electrically connecting together the source regions in one of the columns of the memory cells; and
a plurality of fifth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive the first plurality of inputs on the plurality of first lines, and to provide the first plurality of outputs on the plurality of fourth lines.

18. The neural network device of claim 17, wherein for each of the plurality of fourth lines, the one of the first plurality of outputs provided thereon is, for all of the memory cells in one column of the memory cells, a sum of the currents through the memory cells multiplied by the respective weight values stored in the memory cells.

19. The neural network device of claim 1, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates in one of the rows of the memory cells;
a plurality of second lines each electrically connecting together the source regions in one of the columns of the memory cells; and
a plurality of third lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive the first plurality of inputs on the plurality of first lines, and to provide the first plurality of outputs on the plurality of third lines.

20. The neural network device of claim 19, wherein for each of the plurality of third lines, the one of the first plurality of outputs provided thereon is, for all of the memory cells in one column of the memory cells, a sum of the currents through the memory cells multiplied by the respective weight values stored in the memory cells.

21. The neural network device of claim 1, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together the first non-floating gates of the odd-row memory cells in one of the columns of the memory cells;
a plurality of second lines each electrically connecting together the first non-floating gates of the even-row memory cells in one of the columns of the memory cells;
a plurality of third lines each electrically connecting together the source regions in one of the rows of the memory cells; and
a plurality of fourth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive some of the first plurality of inputs on the plurality of first lines and others of the first plurality of inputs on the plurality of second lines, and to provide the first plurality of outputs on the plurality of third lines.

22. The neural network device of claim 21, wherein for each of the plurality of third lines, the one of the first plurality of outputs provided thereon is, for all of the memory cell pairs in one row of the memory cells, a sum of the differential outputs from the memory cell pairs, and wherein each of the differential outputs is the current through the memory cell pair multiplied by a difference between the respective weight values stored in the memory cell pair.

23. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together some but not all of the first non-floating gates in one of the rows of the memory cells with some but not all of the first non-floating gates in another of the rows of the memory cells;
a plurality of second lines each electrically connecting together the second non-floating gates in one of the columns of the memory cells;
a plurality of third lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fourth lines each electrically connecting together the source regions in one of the rows of the memory cells; and
a plurality of fifth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive the first plurality of inputs on the plurality of first lines, and to provide the first plurality of outputs on the plurality of fifth lines.

24. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together some but not all of the first non-floating gates in one of the rows of the memory cells with some but not all of the first non-floating gates in another of the rows of the memory cells;
a plurality of second lines each electrically connecting together the second non-floating gates in one of the rows of the memory cells;
a plurality of third lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fourth lines each electrically connecting together the source regions in one of the columns of the memory cells; and
a plurality of fifth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive the first plurality of inputs on the plurality of first lines, and to provide the first plurality of outputs on the plurality of fifth lines.

25. The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are arranged in rows and columns, and wherein the first plurality of synapses comprises:
a plurality of first lines each electrically connecting together some but not all of the first non-floating gates of the odd-row memory cells in one of the columns of the memory cells with some but not all of the first non-floating gates of the odd-row memory cells in another of the columns of the memory cells;
a plurality of second lines each electrically connecting together some but not all of the first non-floating gates of the even-row memory cells in one of the columns of the memory cells with some but not all of the first non-floating gates of the even-row memory cells in another of the columns of the memory cells;
a plurality of third lines each electrically connecting together the second non-floating gates in one of the columns of the memory cells;
a plurality of fourth lines each electrically connecting together the third non-floating gates in one of the rows of the memory cells;
a plurality of fifth lines each electrically connecting together the source regions in one of the rows of the memory cells; and
a plurality of sixth lines each electrically connecting together the drain regions in one of the columns of the memory cells;
wherein the first plurality of synapses is configured to receive some of the first plurality of inputs on the plurality of first lines and others of the first plurality of inputs on the plurality of second lines, and to provide the first plurality of outputs on the plurality of fifth lines.
lines, each of the memories The odd bungee regions of one of the rows of body units are electrically connected together; a plurality of seventh lines each electrically connecting together even-numbered drain regions of one of the rows of memory cells; wherein the first plurality of synapses are configured to be in the plurality of One of the first plurality of inputs is received on a line, and the other of the first plurality of inputs is received on the plurality of second lines, and the first plurality of outputs are provided on the plurality of fifth lines. 如請求項5之神經網路裝置,其中該第一複數個突觸之該等記憶體單元經配置成列及行,且其中該第一複數個突觸包含:複數個第一線,其各將該等記憶體單元之該等行中之一者中的奇數列記憶體單元之一些而非所有的該等第一非浮閘,與該等記憶體單元之該等行中之另一者中的奇數列記憶體單元之一些而非所有的該等第一非浮閘電氣連接在一起;複數個第二線,其各將該等記憶體單元之該等行中之一者中的偶數列記憶體單元之一些而非所有的該等第一非浮閘,與該等記憶體單元之該等行中之另一者中的偶數列記憶體單元之一些而非所有的該等第一非浮閘電氣連接在一起;複數個第三線,其各將該等記憶體單元之該等列中之一者中的該等第二非浮閘電氣連接在一起;複數個第四線,其各將該等記憶體單元之該等行中之一者中的奇數列記憶體單元之該等第三非浮閘電氣連接在一起;複數個第五線,其各將該等記憶體單元之該等行中之一者中的偶數列記憶體單元之該等第三非浮閘電氣連接在一起; 複數個第六線,其各將該等記憶體單元之該等列中之一者中的該等源極區電氣連接在一起;複數個第七線,其各將該等記憶體單元之該等行中之一者中的奇數汲極區電氣連接在一起;複數個第八線,其各將該等記憶體單元之該等行中之一者中的偶數汲極區電氣連接在一起;其中該第一複數個突觸經組態來在該複數個第一線上接收該第一複數個輸入中之一些者,且在該複數個第二線上接收該第一複數個輸入中之其他者,且在該複數個第六線上提供該第一複數個輸出。 The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are configured as columns and rows, and wherein the first plurality of synapses comprise: a plurality of first lines, each of which Some but not all of the first non-floating gates of the odd-numbered memory cells of one of the rows of the memory cells, and the other of the rows of the memory cells Some, but not all, of the first non-floating gates of the odd-numbered memory cells are electrically connected together; a plurality of second lines, each of which is an even number of one of the rows of the memory cells Some but not all of the first non-floating gates of the column memory cells, and some but not all of the even-numbered column memory cells of the other of the rows of the memory cells The non-floating gates 
are electrically connected together; a plurality of third wires each electrically connecting the second non-floating gates of one of the columns of the memory cells; a plurality of fourth wires, The odd column memory cells of one of the rows of the memory cells And the third non-floating gates are electrically connected together; the plurality of fifth lines each of the third non-floating gates of the even-numbered memory cells of one of the rows of the memory cells Together a plurality of sixth lines each electrically connecting the source regions of one of the columns of the memory cells; a plurality of seventh lines each of the memory cells An odd-numbered drain region of one of the rows is electrically connected together; a plurality of eighth wires each electrically connecting the even-numbered drain regions of one of the rows of the memory cells; Wherein the first plurality of synapses are configured to receive some of the first plurality of inputs on the plurality of first lines, and receive the other of the first plurality of inputs on the plurality of second lines And providing the first plurality of outputs on the plurality of sixth lines. 
如請求項5之神經網路裝置,其中該第一複數個突觸之該等記憶體單元經配置成列及行,且其中該第一複數個突觸包含:複數個第一線,其各將該等記憶體單元之該等列中之一者中的一些而非所有的該等第一非浮閘,與該等記憶體單元之該等列中之另一者中的一些而非所有的該等第一非浮閘電氣連接在一起;複數個第二線,其各將該等記憶體單元之該等列中之一者中的該等第二非浮閘電氣連接在一起;複數個第三線,其各將該等記憶體單元之該等列中之一者中的該等第三非浮閘電氣連接在一起;複數個第四線,其各將該等記憶體單元之該等行中之一者中的該等源極區電氣連接在一起; 複數個第五線,其各將該等記憶體單元之該等行中之一者中的該等汲極區電氣連接在一起;其中該第一複數個突觸經組態來在該複數個第一線上接收該第一複數個輸入,且在該複數個第四線上提供該第一複數個輸出。 The neural network device of claim 5, wherein the memory cells of the first plurality of synapses are configured as columns and rows, and wherein the first plurality of synapses comprise: a plurality of first lines, each of which Some but not all of the first non-floating gates of one of the columns of the memory cells, and some but not all of the other of the columns of the memory cells The first non-floating gates are electrically connected together; a plurality of second wires each electrically connecting the second non-floating gates in one of the columns of the memory cells; a third line electrically connecting the third non-floating gates of one of the columns of the memory cells together; a plurality of fourth lines each of the memory cells The source regions of one of the rows are electrically connected together; a plurality of fifth lines each electrically connecting the one of the ones of the rows of memory cells together; wherein the first plurality of synapses are configured to be in the plurality of The first plurality of inputs are received on the first line, and the first plurality of outputs are provided on the plurality of fourth lines. 
如請求項1之神經網路裝置,其中該第一複數個突觸之該等記憶體單元經配置成列及行,且其中該第一複數個突觸包含:複數個第一線,其各將該等記憶體單元之該等列中之一者中的一些而非所有的該等第一非浮閘,與該等記憶體單元之該等列中之另一者中的一些而非所有的該等第一非浮閘電氣連接在一起;複數個第二線,其各將該等記憶體單元之該等行中之一者中的該等源極區電氣連接在一起;複數個第三線,其各將該等記憶體單元之該等行中之一者中的該等汲極區電氣連接在一起;其中該第一複數個突觸經組態來在該複數個第一線上接收該第一複數個輸入,且在該複數個第三線上提供該第一複數個輸出。 The neural network device of claim 1, wherein the memory cells of the first plurality of synapses are configured as columns and rows, and wherein the first plurality of synapses comprise: a plurality of first lines, each of which Some but not all of the first non-floating gates of one of the columns of the memory cells, and some but not all of the other of the columns of the memory cells The first non-floating gates are electrically connected together; a plurality of second lines each electrically connecting the source regions of one of the rows of the memory cells together; a three-wire, each of the one of the rows of the memory cells being electrically connected together; wherein the first plurality of synapses are configured to receive on the plurality of first lines The first plurality of inputs, and the first plurality of outputs are provided on the plurality of third lines. 如請求項19之神經網路裝置,其中對於該複數個第三線之各者而言,該第一複數個輸出中之一者係提供於其上,對於一行該等記憶體單元中之所有該等記憶體單元而言,該第一複數個輸出中之該一者為穿過該等記憶體單元之電流乘以儲存於該等記憶體單元中之各別權重值的一和。 The neural network device of claim 19, wherein for each of the plurality of third lines, one of the first plurality of outputs is provided thereon for all of the memory cells in a row In the case of a memory unit, the one of the first plurality of outputs is a sum of currents passing through the memory cells multiplied by respective weight values stored in the memory cells. 
如請求項1之神經網路裝置,其中該第一複數個突觸之該等記憶體單元經配置成列及行,且其中該第一複數個突觸包含: 複數個第一線,其各將該等記憶體單元之該等行中之一者中的奇數列記憶體單元之一些而非所有的該等第一非浮閘,與該等記憶體單元之該等行中之另一者中的奇數列記憶體單元之一些而非所有的該等第一非浮閘電氣連接在一起;複數個第二線,其各將該等記憶體單元之該等行中之一者中的偶數列記憶體單元之一些而非所有的該等第一非浮閘,與該等記憶體單元之該等行中之另一者中的偶數列記憶體單元之一些而非所有的該等第一非浮閘電氣連接在一起;複數個第三線,其各將該等記憶體單元之該等列中之一者中的該等源極區電氣連接在一起;複數個第四線,其各將該等記憶體單元之該等行中之一者中的該等汲極區電氣連接在一起;其中該第一複數個突觸經組態來在該複數個第一線上接收該第一複數個輸入中之一些者,且在該複數個第二線上接收該第一複數個輸入中之其他者,且在該複數個第三線上提供該第一複數個輸出。 The neural network device of claim 1, wherein the memory cells of the first plurality of synapses are configured as columns and rows, and wherein the first plurality of synapses comprise: a plurality of first lines, each of the odd-numbered memory cells of one of the rows of the memory cells, and not all of the first non-floating gates, and the memory cells Some of the odd-numbered memory cells of the other of the rows, but not all of the first non-floating gates, are electrically connected together; a plurality of second lines, each of the memory cells Some of the even-numbered column memory cells in one of the rows, but not all of the first non-floating gates, and some of the even-numbered column memory cells in the other of the rows of the memory cells And not all of the first non-floating gates are electrically connected together; a plurality of third lines each electrically connecting the source regions of one of the columns of the memory cells together; a fourth line electrically connecting the one of the ones of the rows of the memory cells together; wherein the first plurality of synapses are configured to be in the plurality of Receiving some of the first plurality of inputs on a line and receiving on the plurality of second lines A first plurality of inputs of the other person, and providing the first plurality of outputs of the plurality of third lines.
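The claims above describe an output line whose signal is the sum, over a column of memory cells, of the current through each cell multiplied by the weight value stored in that cell — i.e., a vector-by-matrix multiplication performed in the array. The sketch below is a minimal numerical illustration of that read-out, not an implementation from the patent; the function name, array shapes, and example values are illustrative assumptions.

```python
import numpy as np

def synapse_array_outputs(inputs, weights):
    """Illustrative model of the claimed read-out (names are assumptions).

    inputs:  (n_rows,) activations applied on the input lines.
    weights: (n_rows, n_cols) weight values stored in the memory cells,
             one column of cells per output line.
    Returns: (n_cols,) output-line values, each the sum over its column
             of input-line activation times stored weight.
    """
    inputs = np.asarray(inputs, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Each output line sums current x weight over all cells in its column,
    # which is exactly a vector-by-matrix product.
    return inputs @ weights

# Example: 3 input lines driving 2 output (column) lines.
x = [1.0, 0.5, 0.0]
w = [[0.2, 0.8],
     [0.4, 0.1],
     [0.9, 0.3]]
print(synapse_array_outputs(x, w))  # [0.4  0.85]
```

In the device itself this sum is formed physically, by summing the cell currents on a shared line, rather than digitally as here.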
TW106116163A 2016-05-17 2017-05-16 Neural network device TWI631517B (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201662337760P 2016-05-17 2016-05-17
US62/337,760 2016-05-17
US15/594,439 2017-05-12
PCT/US17/32552 2017-05-12
PCT/US2017/032552 WO2017200883A1 (en) 2016-05-17 2017-05-12 Deep learning neural network classifier using non-volatile memory array
US15/594,439 US11308383B2 (en) 2016-05-17 2017-05-12 Deep learning neural network classifier using non-volatile memory array

Publications (2)

Publication Number Publication Date
TW201741943A true TW201741943A (en) 2017-12-01
TWI631517B TWI631517B (en) 2018-08-01

Family

Family ID: 60326170

Family Applications (1)

Application Number Title Priority Date Filing Date
TW106116163A TWI631517B (en) 2016-05-17 2017-05-16 Neural network device

Country Status (5)

Country Link
US (9) US11308383B2 (en)
JP (1) JP6833873B2 (en)
KR (1) KR102182583B1 (en)
TW (1) TWI631517B (en)
WO (1) WO2017200883A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI659428B (en) * 2018-01-12 2019-05-11 中原大學 Method of performing feedforward and recurrent operations in an artificial neural nonvolatile memory network using nonvolatile memory cells
TWI696126B (en) * 2018-02-13 2020-06-11 旺宏電子股份有限公司 Memory device structure for neuromorphic computing system and manufacturing method thereof
TWI719757B (en) * 2019-01-18 2021-02-21 美商超捷公司 Neural network classifier using array of three-gate non-volatile memory cells
TWI720524B (en) * 2019-03-20 2021-03-01 旺宏電子股份有限公司 Method and circuit for performing in-memory multiply-and-accumulate function
US10957392B2 (en) 2018-01-17 2021-03-23 Macronix International Co., Ltd. 2D and 3D sum-of-products array for neuromorphic computing system
US11119674B2 (en) 2019-02-19 2021-09-14 Macronix International Co., Ltd. Memory devices and methods for operating the same
US11138497B2 (en) 2018-07-17 2021-10-05 Macronix International Co., Ltd In-memory computing devices for neural networks
TWI751403B (en) * 2018-01-23 2022-01-01 美商安納富來希股份有限公司 Neural network circuits having non-volatile synapse arrays and neural chip
TWI754162B (en) * 2018-08-27 2022-02-01 美商超捷公司 Analog neuromorphic memory system and method of performing temperature compensation in an analog neuromorphic memory system
TWI787099B (en) * 2018-05-01 2022-12-11 美商超捷公司 Method and apparatus for high voltage generation for analog neural memory in deep learning artificial neural network
US11562229B2 (en) 2018-11-30 2023-01-24 Macronix International Co., Ltd. Convolution accelerator using in-memory computation
US11636325B2 (en) 2018-10-24 2023-04-25 Macronix International Co., Ltd. In-memory data pooling for machine learning
TWI810613B (en) * 2018-03-14 2023-08-01 美商超捷公司 Apparatus for programming analog neural memory in a deep learning artificial neural network
US11934480B2 (en) 2018-12-18 2024-03-19 Macronix International Co., Ltd. NAND block architecture for in-memory multiply-and-accumulate operations

Families Citing this family (90)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10311958B2 (en) 2016-05-17 2019-06-04 Silicon Storage Technology, Inc. Array of three-gate flash memory cells with individual memory cell read, program and erase
US10269440B2 (en) 2016-05-17 2019-04-23 Silicon Storage Technology, Inc. Flash memory array with individual memory cell read, program and erase
JP6833873B2 (en) 2016-05-17 2021-02-24 シリコン ストーリッジ テクノロージー インコーポレイテッドSilicon Storage Technology, Inc. Deep learning neural network classifier using non-volatile memory array
WO2018137177A1 (en) * 2017-01-25 2018-08-02 北京大学 Method for convolution operation based on nor flash array
JP6708146B2 (en) * 2017-03-03 2020-06-10 株式会社デンソー Neural network circuit
US10147019B2 (en) * 2017-03-20 2018-12-04 Sap Se Small object detection
US10580492B2 (en) * 2017-09-15 2020-03-03 Silicon Storage Technology, Inc. System and method for implementing configurable convoluted neural networks with flash memories
US10748630B2 (en) * 2017-11-29 2020-08-18 Silicon Storage Technology, Inc. High precision and highly efficient tuning mechanisms and algorithms for analog neuromorphic memory in artificial neural networks
US11087207B2 (en) * 2018-03-14 2021-08-10 Silicon Storage Technology, Inc. Decoders for analog neural memory in deep learning artificial neural network
US10803943B2 (en) * 2017-11-29 2020-10-13 Silicon Storage Technology, Inc. Neural network classifier using array of four-gate non-volatile memory cells
US11361215B2 (en) 2017-11-29 2022-06-14 Anaflash Inc. Neural network circuits having non-volatile synapse arrays
KR102408858B1 (en) * 2017-12-19 2022-06-14 삼성전자주식회사 A nonvolatile memory device, a memory system including the same and a method of operating a nonvolatile memory device
KR102121562B1 (en) * 2017-12-21 2020-06-10 이화여자대학교 산학협력단 Neuromorphic device using 3d crossbar memory
US10628295B2 (en) * 2017-12-26 2020-04-21 Samsung Electronics Co., Ltd. Computing mechanisms using lookup tables stored on memory
CN108038542B (en) * 2017-12-27 2022-01-07 上海闪易半导体有限公司 Storage module, module and data processing method based on neural network
KR102130532B1 (en) * 2017-12-29 2020-07-07 포항공과대학교 산학협력단 Kernel Hardware Device
US11354562B2 (en) * 2018-01-03 2022-06-07 Silicon Storage Technology, Inc. Programmable neuron for analog non-volatile memory in deep learning artificial neural network
US10446246B2 (en) * 2018-03-14 2019-10-15 Silicon Storage Technology, Inc. Method and apparatus for data refresh for analog non-volatile memory in deep learning neural network
US10580491B2 (en) * 2018-03-23 2020-03-03 Silicon Storage Technology, Inc. System and method for managing peak power demand and noise in non-volatile memory array
CN108509179B (en) * 2018-04-04 2021-11-30 百度在线网络技术(北京)有限公司 Method for detecting human face and device for generating model
US11403518B2 (en) * 2018-04-25 2022-08-02 Denso Corporation Neural network circuit
US10891080B1 (en) 2018-06-04 2021-01-12 Mentium Technologies Inc. Management of non-volatile memory arrays
US11568229B2 (en) * 2018-07-11 2023-01-31 Silicon Storage Technology, Inc. Redundant memory access for rows or columns containing faulty memory cells in analog neural memory in deep learning artificial neural network
US11443175B2 (en) 2018-07-11 2022-09-13 Silicon Storage Technology, Inc. Compensation for reference transistors and memory cells in analog neuro memory in deep learning artificial neural network
US10671891B2 (en) * 2018-07-19 2020-06-02 International Business Machines Corporation Reducing computational costs of deep reinforcement learning by gated convolutional neural network
US20210342678A1 (en) * 2018-07-19 2021-11-04 The Regents Of The University Of California Compute-in-memory architecture for neural networks
CN109284474B (en) * 2018-08-13 2020-09-11 北京大学 Flash memory system and method for realizing image convolution operation with assistance of adder
US10860918B2 (en) 2018-08-21 2020-12-08 Silicon Storage Technology, Inc. Analog neural memory system for deep learning neural network comprising multiple vector-by-matrix multiplication arrays and shared components
US10956814B2 (en) * 2018-08-27 2021-03-23 Silicon Storage Technology, Inc. Configurable analog neural memory system for deep learning neural network
KR20200028168A (en) 2018-09-06 2020-03-16 삼성전자주식회사 Computing apparatus using convolutional neural network and operating method for the same
US10741568B2 (en) 2018-10-16 2020-08-11 Silicon Storage Technology, Inc. Precision tuning for the programming of analog neural memory in a deep learning artificial neural network
US11449268B2 (en) * 2018-11-20 2022-09-20 Samsung Electronics Co., Ltd. Deep solid state device (deep-SSD): a neural network based persistent data storage
CN109558032B (en) * 2018-12-05 2020-09-04 北京三快在线科技有限公司 Operation processing method and device and computer equipment
US11133059B2 (en) 2018-12-06 2021-09-28 Western Digital Technologies, Inc. Non-volatile memory die with deep learning neural network
US11409352B2 (en) * 2019-01-18 2022-08-09 Silicon Storage Technology, Inc. Power management for an analog neural memory in a deep learning artificial neural network
US11893478B2 (en) 2019-01-18 2024-02-06 Silicon Storage Technology, Inc. Programmable output blocks for analog neural memory in a deep learning artificial neural network
US11023559B2 (en) 2019-01-25 2021-06-01 Microsemi Soc Corp. Apparatus and method for combining analog neural net with FPGA routing in a monolithic integrated circuit
JP7270747B2 (en) * 2019-01-29 2023-05-10 シリコン ストーリッジ テクノロージー インコーポレイテッド A neural network classifier using an array of 4-gate non-volatile memory cells
US11144824B2 (en) 2019-01-29 2021-10-12 Silicon Storage Technology, Inc. Algorithms and circuitry for verifying a value stored during a programming operation of a non-volatile memory cell in an analog neural memory in deep learning artificial neural network
US10720217B1 (en) * 2019-01-29 2020-07-21 Silicon Storage Technology, Inc. Memory device and method for varying program state separation based upon frequency of use
US10916306B2 (en) 2019-03-07 2021-02-09 Western Digital Technologies, Inc. Burst mode operation conditioning for a memory device
US10896726B2 (en) * 2019-04-02 2021-01-19 Junsung KIM Method for reading a cross-point type memory array comprising a two-terminal switching material
US11423979B2 (en) * 2019-04-29 2022-08-23 Silicon Storage Technology, Inc. Decoding system and physical layout for analog neural memory in deep learning artificial neural network
US11507642B2 (en) * 2019-05-02 2022-11-22 Silicon Storage Technology, Inc. Configurable input blocks and output blocks and physical layout for analog neural memory in deep learning artificial neural network
US11080152B2 (en) 2019-05-15 2021-08-03 Western Digital Technologies, Inc. Optimized neural network data organization
US11081168B2 (en) * 2019-05-23 2021-08-03 Hefei Reliance Memory Limited Mixed digital-analog memory devices and circuits for secure storage and computing
US11520521B2 (en) 2019-06-20 2022-12-06 Western Digital Technologies, Inc. Storage controller having data augmentation components for use with non-volatile memory die
US11501109B2 (en) 2019-06-20 2022-11-15 Western Digital Technologies, Inc. Non-volatile memory die with on-chip data augmentation components for use with machine learning
US20200410319A1 (en) * 2019-06-26 2020-12-31 Micron Technology, Inc. Stacked artificial neural networks
US11449741B2 (en) 2019-07-19 2022-09-20 Silicon Storage Technology, Inc. Testing circuitry and methods for analog neural memory in artificial neural network
US11393546B2 (en) 2019-07-19 2022-07-19 Silicon Storage Technology, Inc. Testing circuitry and methods for analog neural memory in artificial neural network
KR102448396B1 (en) * 2019-09-16 2022-09-27 포항공과대학교 산학협력단 Capacitance-based neural network with flexible weight bit-width
US11507816B2 (en) * 2019-09-19 2022-11-22 Silicon Storage Technology, Inc. Precision tuning for the programming of analog neural memory in a deep learning artificial neural network
KR102225558B1 (en) 2019-10-14 2021-03-08 연세대학교 산학협력단 Multilayer Computing Circuit Based on Analog Signal Transfer with On-Chip Activation Function
US11755899B2 (en) 2019-11-11 2023-09-12 Silicon Storage Technology, Inc. Precise programming method and apparatus for analog neural memory in an artificial neural network
KR102434119B1 (en) * 2019-12-03 2022-08-19 서울대학교산학협력단 Neural network with a synapse string array
KR102425869B1 (en) * 2019-12-09 2022-07-28 광주과학기술원 CMOS-based crossbar deep learning accelerator
KR20210075542A (en) 2019-12-13 2021-06-23 삼성전자주식회사 Three-dimensional neuromorphic device including switching element and resistive element
KR102556249B1 (en) * 2020-01-02 2023-07-14 서울대학교산학협력단 Synapse string array architectures for neural networks
US11636322B2 (en) 2020-01-03 2023-04-25 Silicon Storage Technology, Inc. Precise data tuning method and apparatus for analog neural memory in an artificial neural network
US11393535B2 (en) 2020-02-26 2022-07-19 Silicon Storage Technology, Inc. Ultra-precise tuning of analog neural memory cells in a deep learning artificial neural network
US11600321B2 (en) 2020-03-05 2023-03-07 Silicon Storage Technology, Inc. Analog neural memory array storing synapsis weights in differential cell pairs in artificial neural network
US11532354B2 (en) 2020-03-22 2022-12-20 Silicon Storage Technology, Inc. Precision tuning of a page or word of non-volatile memory cells and associated high voltage circuits for an analog neural memory array in an artificial neural network
WO2021199386A1 (en) 2020-04-01 2021-10-07 岡島 義憲 Fuzzy string search circuit
US11521085B2 (en) 2020-04-07 2022-12-06 International Business Machines Corporation Neural network weight distribution from a grid of memory elements
US20210350217A1 (en) 2020-05-10 2021-11-11 Silicon Storage Technology, Inc. Analog neural memory array in artificial neural network with source line pulldown mechanism
US11682459B2 (en) 2020-05-13 2023-06-20 Silicon Storage Technology, Inc. Analog neural memory array in artificial neural network comprising logical cells and improved programming mechanism
US11289164B2 (en) 2020-06-03 2022-03-29 Silicon Storage Technology, Inc. Word line and control gate line tandem decoder for analog neural memory in deep learning artificial neural network
US11507835B2 (en) 2020-06-08 2022-11-22 Western Digital Technologies, Inc. Neural network data updates using in-place bit-addressable writes within storage class memory
KR102318819B1 (en) * 2020-06-10 2021-10-27 연세대학교 산학협력단 In-memory device for operation of multi-bit Weight
US11309042B2 (en) * 2020-06-29 2022-04-19 Silicon Storage Technology, Inc. Method of improving read current stability in analog non-volatile memory by program adjustment for memory cells exhibiting random telegraph noise
US11875852B2 (en) 2020-07-06 2024-01-16 Silicon Storage Technology, Inc. Adaptive bias decoder to provide a voltage to a control gate line in an analog neural memory array in artificial neural network
US20220067499A1 (en) 2020-08-25 2022-03-03 Silicon Storage Technology, Inc. Concurrent write and verify operations in an analog neural memory
JP7458960B2 (en) 2020-11-10 2024-04-01 ルネサスエレクトロニクス株式会社 semiconductor equipment
US11914973B2 (en) 2020-11-19 2024-02-27 Apple Inc. Performing multiple bit computation and convolution in memory
EP4298557A1 (en) 2021-02-25 2024-01-03 Silicon Storage Technology Inc. Precise data tuning method and apparatus for analog neural memory in an artificial neural network
WO2022245382A1 (en) * 2021-05-18 2022-11-24 Silicon Storage Technology, Inc. Split array architecture for analog neural memory in a deep learning artificial neural network
US20220392543A1 (en) * 2021-06-02 2022-12-08 Silicon Storage Technology, Inc. Method of improving read current stability in analog non-volatile memory by post-program tuning for memory cells exhibiting random telegraph noise
CN117581300A (en) 2021-07-05 2024-02-20 硅存储技术股份有限公司 Programmable output block of simulated neural memory in deep learning artificial neural network
WO2023146567A1 (en) 2022-01-28 2023-08-03 Silicon Storage Technology, Inc. Artificial neural network comprising an analog array and a digital array
WO2023154075A1 (en) 2022-02-08 2023-08-17 Silicon Storage Technology, Inc. Calibration of electrical parameters in a deep learning artificial neural network
US20230306246A1 (en) 2022-02-08 2023-09-28 Silicon Storage Technology, Inc. Calibration of electrical parameters in a deep learning artificial neural network
WO2023196001A1 (en) 2022-04-06 2023-10-12 Silicon Storage Technology, Inc. Artificial neural network comprising a three-dimensional integrated circuit
WO2023196000A1 (en) 2022-04-07 2023-10-12 Silicon Storage Technology, Inc. Vector-by-matrix-multiplication array utilizing analog inputs
WO2023195999A1 (en) 2022-04-07 2023-10-12 Silicon Storage Technology, Inc. Artificial neural network comprising reference array for i-v slope configuration
WO2023196002A1 (en) 2022-04-07 2023-10-12 Silicon Storage Technology, Inc. Vector-by-matrix-multiplication array utilizing analog outputs
US20240112003A1 (en) 2022-09-22 2024-04-04 Silicon Storage Technology, Inc. Output circuit for artificial neural network array
WO2024063792A1 (en) 2022-09-22 2024-03-28 Silicon Storage Technology, Inc. Verification method and system in artificial neural network array
WO2024063793A1 (en) 2022-09-22 2024-03-28 Silicon Storage Technology, Inc. Input circuit for artificial neural network array
US20240112729A1 (en) 2022-09-22 2024-04-04 Silicon Storage Technology, Inc. Multiple Row Programming Operation In Artificial Neural Network Array

Family Cites Families (137)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2603414B1 (en) 1986-08-29 1988-10-28 Bull Sa READING AMPLIFIER
JPH06103782B2 (en) 1987-04-17 1994-12-14 日本シイエムケイ株式会社 Printed wiring board
JPS63261874A (en) * 1987-04-20 1988-10-28 Nippon Telegr & Teleph Corp <Ntt> Semiconductor integrated circuit
US5055897A (en) * 1988-07-27 1991-10-08 Intel Corporation Semiconductor cell for neural network and the like
US4904881A (en) 1989-02-10 1990-02-27 Intel Corporation EXCLUSIVE-OR cell for neural network and the like
JP3122756B2 (en) 1991-01-12 2001-01-09 直 柴田 Semiconductor device
US5621336A (en) 1989-06-02 1997-04-15 Shibata; Tadashi Neuron circuit
JPH0318985A (en) * 1989-06-16 1991-01-28 Hitachi Ltd Information processor
US5028810A (en) 1989-07-13 1991-07-02 Intel Corporation Four quadrant synapse cell employing single column summing line
US4961002A (en) 1989-07-13 1990-10-02 Intel Corporation Synapse cell employing dual gate transistor structure
GB2236881B (en) * 1989-10-11 1994-01-12 Intel Corp Improved synapse cell employing dual gate transistor structure
KR920010344B1 (en) 1989-12-29 1992-11-27 삼성전자주식회사 Memory array composition method
US5029130A (en) 1990-01-22 1991-07-02 Silicon Storage Technology, Inc. Single transistor non-valatile electrically alterable semiconductor memory device
US5242848A (en) * 1990-01-22 1993-09-07 Silicon Storage Technology, Inc. Self-aligned method of making a split gate single transistor non-volatile electrically alterable semiconductor memory device
WO1991018349A1 (en) * 1990-05-22 1991-11-28 International Business Machines Corporation Scalable flow virtual learning neurocomputer
US5150450A (en) 1990-10-01 1992-09-22 The United States Of America As Represented By The Secretary Of The Navy Method and circuits for neuron perturbation in artificial neural network memory modification
US5146602A (en) * 1990-12-26 1992-09-08 Intel Corporation Method of increasing the accuracy of an analog neural network and the like
US5138576A (en) 1991-11-06 1992-08-11 Altera Corporation Method and apparatus for erasing an array of electrically erasable EPROM cells
US7071060B1 (en) 1996-02-28 2006-07-04 Sandisk Corporation EEPROM with split gate source side infection with sidewall spacers
DE69319162T2 (en) 1992-03-26 1999-03-25 Hitachi Ltd Flash memory
US5336936A (en) 1992-05-06 1994-08-09 Synaptics, Incorporated One-transistor adaptable analog storage element and array
US5264734A (en) 1992-05-19 1993-11-23 Intel Corporation Difference calculating neural network utilizing switched capacitors
US5256911A (en) * 1992-06-10 1993-10-26 Intel Corporation Neural network with multiplexed snyaptic processing
US5298796A (en) 1992-07-08 1994-03-29 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Nonvolatile programmable neural network synaptic array
US5386132A (en) 1992-11-02 1995-01-31 Wong; Chun C. D. Multimedia storage system with highly compact memory device
JP2835272B2 (en) 1993-12-21 1998-12-14 株式会社東芝 Semiconductor storage device
US5422846A (en) * 1994-04-04 1995-06-06 Motorola Inc. Nonvolatile memory having overerase protection
US5583808A (en) * 1994-09-16 1996-12-10 National Semiconductor Corporation EPROM array segmented for high performance and method for controlling same
KR0151623B1 (en) 1994-12-07 1998-10-01 문정환 Eeprom cell and its making method
US5990512A (en) 1995-03-07 1999-11-23 California Institute Of Technology Hole impact ionization mechanism of hot electron injection and four-terminal ρFET semiconductor structure for long-term learning
US5825063A (en) 1995-03-07 1998-10-20 California Institute Of Technology Three-terminal silicon synaptic device
US6965142B2 (en) 1995-03-07 2005-11-15 Impinj, Inc. Floating-gate semiconductor structures
US5554874A (en) 1995-06-05 1996-09-10 Quantum Effect Design, Inc. Six-transistor cell with wide bit-line pitch, double words lines, and bit-line contact shared among four cells
US5721702A (en) 1995-08-01 1998-02-24 Micron Quantum Devices, Inc. Reference voltage generator using flash memory cells
US5966332A (en) 1995-11-29 1999-10-12 Sanyo Electric Co., Ltd. Floating gate memory cell array allowing cell-by-cell erasure
US6683645B1 (en) 1995-12-01 2004-01-27 Qinetiq Limited Imaging system with low sensitivity to variation in scene illumination
US5748534A (en) 1996-03-26 1998-05-05 Invox Technology Feedback loop for reading threshold voltage
TW420806B (en) * 1998-03-06 2001-02-01 Sanyo Electric Co Non-volatile semiconductor memory device
US6389404B1 (en) * 1998-12-30 2002-05-14 Irvine Sensors Corporation Neural processing module with input architectures that make maximal use of a weighted synapse array
US6222777B1 (en) 1999-04-09 2001-04-24 Sun Microsystems, Inc. Output circuit for alternating multiple bit line per column memory architecture
US6232180B1 (en) 1999-07-02 2001-05-15 Taiwan Semiconductor Manufacturing Corporation Split gate flash memory cell
US6258668B1 (en) * 1999-11-24 2001-07-10 Aplus Flash Technology, Inc. Array architecture and process flow of nonvolatile memory devices for mass storage applications
US6282119B1 (en) * 2000-06-02 2001-08-28 Winbond Electronics Corporation Mixed program and sense architecture using dual-step voltage scheme in multi-level data storage in flash memories
US6829598B2 (en) 2000-10-02 2004-12-07 Texas Instruments Incorporated Method and apparatus for modeling a neural synapse function by utilizing a single conventional MOSFET
US6563167B2 (en) 2001-01-05 2003-05-13 Silicon Storage Technology, Inc. Semiconductor memory array of floating gate memory cells with floating gates having multiple sharp edges
US6563733B2 (en) 2001-05-24 2003-05-13 Winbond Electronics Corporation Memory array architectures based on a triple-polysilicon source-side injection non-volatile memory cell
JP2005522071A (en) 2002-03-22 2005-07-21 Georgia Tech Research Corporation Floating gate analog circuit
US6747310B2 (en) * 2002-10-07 2004-06-08 Actrans System Inc. Flash memory cells with separated self-aligned select and erase gates, and process of fabrication
US6898129B2 (en) * 2002-10-25 2005-05-24 Freescale Semiconductor, Inc. Erase of a memory having a non-conductive storage medium
JP2004171686A (en) 2002-11-20 2004-06-17 Renesas Technology Corp Nonvolatile semiconductor memory device, and data erasing method therefor
JP4601287B2 (en) 2002-12-26 2010-12-22 Renesas Electronics Corporation Nonvolatile semiconductor memory device
US6822910B2 (en) 2002-12-29 2004-11-23 Macronix International Co., Ltd. Non-volatile memory and operating method thereof
US6781186B1 (en) 2003-01-30 2004-08-24 Silicon-Based Technology Corp. Stack-gate flash cell structure having a high coupling ratio and its contactless flash memory arrays
US6856551B2 (en) 2003-02-06 2005-02-15 Sandisk Corporation System and method for programming cells in non-volatile integrated memory devices
US6946894B2 (en) * 2003-06-12 2005-09-20 Winbond Electronics Corporation Current-mode synapse multiplier circuit
WO2005038645A2 (en) * 2003-10-16 2005-04-28 Canon Kabushiki Kaisha Operation circuit and operation control method thereof
TWI220560B (en) 2003-10-27 2004-08-21 Powerchip Semiconductor Corp NAND flash memory cell architecture, NAND flash memory cell array, manufacturing method and operating method of the same
US7315056B2 (en) 2004-06-07 2008-01-01 Silicon Storage Technology, Inc. Semiconductor memory array of floating gate memory cells with program/erase and select gates
US7092290B2 (en) 2004-11-16 2006-08-15 Sandisk Corporation High speed programming system with reduced over programming
TWI270199B (en) 2005-01-31 2007-01-01 Powerchip Semiconductor Corp Non-volatile memory and manufacturing method and operating method thereof
US8443169B2 (en) * 2005-03-28 2013-05-14 Gerald George Pechanek Interconnection network connecting operation-configurable nodes according to one or more levels of adjacency in multiple dimensions of communication in a multi-processor and a neural processor
US7304890B2 (en) 2005-12-13 2007-12-04 Atmel Corporation Double byte select high voltage line for EEPROM memory block
JP4364227B2 (en) * 2006-09-29 2009-11-11 Toshiba Corporation Semiconductor memory device
US7626868B1 (en) 2007-05-04 2009-12-01 Flashsilicon, Incorporation Level verification and adjustment for multi-level cell (MLC) non-volatile memory (NVM)
KR100910869B1 (en) 2007-06-08 2009-08-06 주식회사 하이닉스반도체 Semiconductor Memory Device that uses less channel when it's under test
US7733262B2 (en) 2007-06-15 2010-06-08 Micron Technology, Inc. Quantizing circuits with variable reference signals
US7630246B2 (en) 2007-06-18 2009-12-08 Micron Technology, Inc. Programming rate identification and control in a solid state memory
US20090039410A1 (en) 2007-08-06 2009-02-12 Xian Liu Split Gate Non-Volatile Flash Memory Cell Having A Floating Gate, Control Gate, Select Gate And An Erase Gate With An Overhang Over The Floating Gate, Array And Method Of Manufacturing
US8320191B2 (en) * 2007-08-30 2012-11-27 Infineon Technologies Ag Memory cell arrangement, method for controlling a memory cell, memory array and electronic device
JP2009080892A (en) * 2007-09-26 2009-04-16 Toshiba Corp Semiconductor storage device
US7567457B2 (en) * 2007-10-30 2009-07-28 Spansion Llc Nonvolatile memory array architecture
US7894267B2 (en) 2007-10-30 2011-02-22 Spansion Llc Deterministic programming algorithm that provides tighter cell distributions with a reduced number of programming pulses
US7916551B2 (en) * 2007-11-06 2011-03-29 Macronix International Co., Ltd. Method of programming cell in memory and memory apparatus utilizing the method
US7746698B2 (en) * 2007-12-13 2010-06-29 Spansion Llc Programming in memory devices using source bitline voltage bias
KR20090075062A (en) * 2008-01-03 2009-07-08 Samsung Electronics Co., Ltd. Semiconductor memory device comprising memory cell array having dynamic memory cells using floating body transistors
JP4513865B2 (en) * 2008-01-25 2010-07-28 Seiko Epson Corporation Parallel computing device and parallel computing method
JP5092938B2 (en) * 2008-06-30 2012-12-05 Fujitsu Semiconductor Ltd. Semiconductor memory device and driving method thereof
JP2010267341A (en) * 2009-05-15 2010-11-25 Renesas Electronics Corp Semiconductor device
WO2011115769A2 (en) * 2010-03-15 2011-09-22 California Institute Of Technology System and method for cognitive processing for data fusion
JP5300773B2 (en) 2010-03-29 2013-09-25 Renesas Electronics Corporation Nonvolatile semiconductor memory device
US9665822B2 (en) 2010-06-30 2017-05-30 International Business Machines Corporation Canonical spiking neuron network for spatiotemporal associative memory
US8325521B2 (en) 2010-10-08 2012-12-04 Taiwan Semiconductor Manufacturing Company, Ltd. Structure and inhibited operation of flash memory with split gate
US8473439B2 (en) 2010-12-08 2013-06-25 International Business Machines Corporation Integrate and fire electronic neurons
US8892487B2 (en) 2010-12-30 2014-11-18 International Business Machines Corporation Electronic synapses for reinforcement learning
JP2012160244A (en) * 2011-02-02 2012-08-23 Lapis Semiconductor Co Ltd Semiconductor nonvolatile memory
JP2013041654A (en) 2011-08-19 2013-02-28 Toshiba Corp Nonvolatile storage device
US8909576B2 (en) * 2011-09-16 2014-12-09 International Business Machines Corporation Neuromorphic event-driven neural computing architecture in a scalable neural network
US8760955B2 (en) 2011-10-21 2014-06-24 Taiwan Semiconductor Manufacturing Company, Ltd. Electrical fuse memory arrays
WO2014021150A1 (en) * 2012-07-31 2014-02-06 Sharp Corporation Display device and driving method therefor
US9466732B2 (en) 2012-08-23 2016-10-11 Silicon Storage Technology, Inc. Split-gate memory cell with depletion-mode floating gate channel, and method of making same
US9153230B2 (en) * 2012-10-23 2015-10-06 Google Inc. Mobile speech recognition hardware accelerator
CN103000218A (en) 2012-11-20 2013-03-27 上海宏力半导体制造有限公司 Memory circuit
US9275748B2 (en) 2013-03-14 2016-03-01 Silicon Storage Technology, Inc. Low leakage, low threshold voltage, split-gate flash cell operation
WO2015001697A1 (en) 2013-07-04 2015-01-08 Panasonic IP Management Co., Ltd. Neural network circuit and learning method thereof
US9753959B2 (en) 2013-10-16 2017-09-05 University Of Tennessee Research Foundation Method and apparatus for constructing a neuroscience-inspired artificial neural network with visualization of neural pathways
US9025386B1 (en) * 2013-11-20 2015-05-05 International Business Machines Corporation Embedded charge trap multi-time-programmable-read-only-memory for high performance logic technology
US9146886B2 (en) * 2014-01-06 2015-09-29 International Business Machines Corporation Deterministic message processing in a direct memory access adapter
US20150213898A1 (en) 2014-01-27 2015-07-30 Silicon Storage Technology, Inc. Byte Erasable Non-volatile Memory Architecture And Method Of Erasing Same
US20150324691A1 (en) 2014-05-07 2015-11-12 Seagate Technology Llc Neural network connections using nonvolatile memory devices
US9286982B2 (en) 2014-08-08 2016-03-15 Silicon Storage Technology, Inc. Flash memory system with EEPROM functionality
US9760533B2 (en) * 2014-08-14 2017-09-12 The Regents Of The University Of Michigan Floating-gate transistor array for performing weighted sum computation
US9984754B2 (en) 2014-09-29 2018-05-29 Toshiba Memory Corporation Memory device and method for operating the same
US10312248B2 (en) 2014-11-12 2019-06-04 Silicon Storage Technology, Inc. Virtual ground non-volatile memory array
US9361991B1 (en) 2014-12-23 2016-06-07 Sandisk Technologies Inc. Efficient scanning of nonvolatile memory blocks
CN104615909B (en) 2015-02-02 2018-02-13 天津大学 Izhikevich neuroid synchronous discharge emulation platforms based on FPGA
CN105990367B (en) 2015-02-27 2019-03-12 硅存储技术公司 Nonvolatile memory unit array with ROM cell
US10474948B2 (en) * 2015-03-27 2019-11-12 University Of Dayton Analog neuromorphic circuit implemented using resistive memories
US9659604B1 (en) * 2015-12-07 2017-05-23 Globalfoundries Inc. Dual-bit 3-T high density MTPROM array
US10698975B2 (en) 2016-01-27 2020-06-30 Hewlett Packard Enterprise Development Lp In situ transposition
US20170330070A1 (en) 2016-02-28 2017-11-16 Purdue Research Foundation Spin orbit torque based electronic neuron
JP6833873B2 (en) 2016-05-17 2021-02-24 Silicon Storage Technology, Inc. Deep learning neural network classifier using non-volatile memory array
US10311958B2 (en) 2016-05-17 2019-06-04 Silicon Storage Technology, Inc. Array of three-gate flash memory cells with individual memory cell read, program and erase
US10269440B2 (en) 2016-05-17 2019-04-23 Silicon Storage Technology, Inc. Flash memory array with individual memory cell read, program and erase
US9910827B2 (en) 2016-07-01 2018-03-06 Hewlett Packard Enterprise Development Lp Vector-matrix multiplications involving negative values
US10346347B2 (en) 2016-10-03 2019-07-09 The Regents Of The University Of Michigan Field-programmable crossbar array for reconfigurable computing
US20180131946A1 (en) * 2016-11-07 2018-05-10 Electronics And Telecommunications Research Institute Convolution neural network system and method for compressing synapse data of convolution neural network
CN110574043B (en) 2016-12-09 2023-09-15 许富菖 Three-dimensional neural network array
US10860923B2 (en) 2016-12-20 2020-12-08 Samsung Electronics Co., Ltd. High-density neuromorphic computing element
KR20180073118A (en) * 2016-12-22 2018-07-02 Samsung Electronics Co., Ltd. Convolutional neural network processing method and apparatus
KR102449586B1 (en) 2017-02-24 2022-10-04 에이에스엠엘 네델란즈 비.브이. Methods of determining process models by machine learning
US10748059B2 (en) 2017-04-05 2020-08-18 International Business Machines Corporation Architecture for an electrochemical artificial neural network
KR20200010496A (en) 2017-05-26 2020-01-30 에이에스엠엘 네델란즈 비.브이. Assist feature placement based on machine learning
US10460817B2 (en) 2017-07-13 2019-10-29 Qualcomm Incorporated Multiple (multi-) level cell (MLC) non-volatile (NV) memory (NVM) matrix circuits for performing matrix computations with multi-bit input vectors
US10482929B2 (en) 2017-07-13 2019-11-19 Qualcomm Incorporated Non-volatile (NV) memory (NVM) matrix circuits employing NVM matrix circuits for performing matrix computations
US10580492B2 (en) 2017-09-15 2020-03-03 Silicon Storage Technology, Inc. System and method for implementing configurable convoluted neural networks with flash memories
CN109522753B (en) 2017-09-18 2020-11-06 清华大学 Circuit structure and driving method thereof, chip and authentication method thereof, and electronic device
US10303998B2 (en) 2017-09-28 2019-05-28 International Business Machines Corporation Floating gate for neural network inference
US11354562B2 (en) 2018-01-03 2022-06-07 Silicon Storage Technology, Inc. Programmable neuron for analog non-volatile memory in deep learning artificial neural network
US10552510B2 (en) 2018-01-11 2020-02-04 Mentium Technologies Inc. Vector-by-matrix multiplier modules based on non-volatile 2D and 3D memory arrays
US10740181B2 (en) 2018-03-06 2020-08-11 Western Digital Technologies, Inc. Failed storage device rebuild method
US10496374B2 (en) 2018-03-22 2019-12-03 Hewlett Packard Enterprise Development Lp Crossbar array operations using ALU modified signals
US10217512B1 (en) 2018-05-15 2019-02-26 International Business Machines Corporation Unit cell with floating gate MOSFET for analog memory
US10692570B2 (en) 2018-07-11 2020-06-23 Sandisk Technologies Llc Neural network matrix multiplication in memory cells
US11061646B2 (en) 2018-09-28 2021-07-13 Intel Corporation Compute in memory circuits with multi-Vdd arrays and/or analog multipliers
US10891222B2 (en) 2018-12-24 2021-01-12 Macronix International Co., Ltd. Memory storage device and operation method thereof for implementing inner product operation
US11270763B2 (en) 2019-01-18 2022-03-08 Silicon Storage Technology, Inc. Neural network classifier using array of three-gate non-volatile memory cells
US10741611B1 (en) 2019-02-11 2020-08-11 International Business Machines Corporation Resistive processing units with complementary metal-oxide-semiconductor non-volatile analog memory

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI659428B (en) * 2018-01-12 2019-05-11 中原大學 Method of performing feedforward and recurrent operations in an artificial neural nonvolatile memory network using nonvolatile memory cells
US10957392B2 (en) 2018-01-17 2021-03-23 Macronix International Co., Ltd. 2D and 3D sum-of-products array for neuromorphic computing system
TWI751403B (en) * 2018-01-23 2022-01-01 美商安納富來希股份有限公司 Neural network circuits having non-volatile synapse arrays and neural chip
TWI696126B (en) * 2018-02-13 2020-06-11 旺宏電子股份有限公司 Memory device structure for neuromorphic computing system and manufacturing method thereof
TWI810613B (en) * 2018-03-14 2023-08-01 美商超捷公司 Apparatus for programming analog neural memory in a deep learning artificial neural network
TWI787099B (en) * 2018-05-01 2022-12-11 美商超捷公司 Method and apparatus for high voltage generation for analog neural memory in deep learning artificial neural network
US11138497B2 (en) 2018-07-17 2021-10-05 Macronix International Co., Ltd. In-memory computing devices for neural networks
TWI754162B (en) * 2018-08-27 2022-02-01 美商超捷公司 Analog neuromorphic memory system and method of performing temperature compensation in an analog neuromorphic memory system
US11636325B2 (en) 2018-10-24 2023-04-25 Macronix International Co., Ltd. In-memory data pooling for machine learning
US11562229B2 (en) 2018-11-30 2023-01-24 Macronix International Co., Ltd. Convolution accelerator using in-memory computation
US11934480B2 (en) 2018-12-18 2024-03-19 Macronix International Co., Ltd. NAND block architecture for in-memory multiply-and-accumulate operations
TWI737079B (en) * 2019-01-18 2021-08-21 美商超捷公司 Neural network classifier using array of two-gate non-volatile memory cells
TWI719757B (en) * 2019-01-18 2021-02-21 美商超捷公司 Neural network classifier using array of three-gate non-volatile memory cells
US11119674B2 (en) 2019-02-19 2021-09-14 Macronix International Co., Ltd. Memory devices and methods for operating the same
US11132176B2 (en) 2019-03-20 2021-09-28 Macronix International Co., Ltd. Non-volatile computing method in flash memory
TWI720524B (en) * 2019-03-20 2021-03-01 旺宏電子股份有限公司 Method and circuit for performing in-memory multiply-and-accumulate function

Also Published As

Publication number Publication date
US20230229888A1 (en) 2023-07-20
US20230252265A1 (en) 2023-08-10
US20230259738A1 (en) 2023-08-17
KR20190008912A (en) 2019-01-25
US20230229887A1 (en) 2023-07-20
US11790208B2 (en) 2023-10-17
JP2019517138A (en) 2019-06-20
US20210232893A1 (en) 2021-07-29
US20170337466A1 (en) 2017-11-23
KR102182583B1 (en) 2020-11-24
US20200151543A1 (en) 2020-05-14
US11853856B2 (en) 2023-12-26
US20210287065A1 (en) 2021-09-16
US11829859B2 (en) 2023-11-28
US20230206026A1 (en) 2023-06-29
WO2017200883A1 (en) 2017-11-23
TWI631517B (en) 2018-08-01
US20220147794A1 (en) 2022-05-12
JP6833873B2 (en) 2021-02-24
US11308383B2 (en) 2022-04-19

Similar Documents

Publication Publication Date Title
US11790208B2 (en) Output circuitry for non-volatile memory array in neural network
KR102331445B1 (en) High-precision and high-efficiency tuning mechanisms and algorithms for analog neuromorphic memory in artificial neural networks.
KR102307675B1 (en) Systems and methods for implementing configurable convolutional neural networks with flash memory
KR102607529B1 (en) Neural network classifier using an array of 3-gate non-volatile memory cells
CN109196528B (en) Deep learning neural network classifier using non-volatile memory array
JP2022519041A (en) Neural network classifier using an array of stack gate non-volatile memory cells
KR102350213B1 (en) Neural Network Classifier Using an Array of 4-Gated Non-Volatile Memory Cells