WO2022110327A1 - Neuronal network unit - Google Patents

Neuronal network unit Download PDF

Info

Publication number
WO2022110327A1
Authority
WO
WIPO (PCT)
Prior art keywords
control transistor
read control
magnetic tunnel
tunnel junction
network unit
Prior art date
Application number
PCT/CN2020/135749
Other languages
French (fr)
Chinese (zh)
Inventor
孔繁生
周华
Original Assignee
光华临港工程应用技术研发(上海)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 光华临港工程应用技术研发(上海)有限公司 filed Critical 光华临港工程应用技术研发(上海)有限公司
Publication of WO2022110327A1 publication Critical patent/WO2022110327A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/061Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using biological neurons, e.g. biological neurons connected to an integrated circuit
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means

Definitions

  • the invention relates to the field of artificial intelligence, in particular to a neural network unit.
  • FIG. 1 is a schematic diagram of a typical neuron network architecture and the corresponding circuit, including N×M synaptic weight components.
  • the corresponding circuit includes an N×M cell array, and each cell is a non-volatile memory (NVM) composed of two symmetrically arranged transistors and corresponding variable resistors.
  • the technical problem to be solved by the present invention is to provide a neuron network unit, which can reduce the circuit area.
  • the present invention provides a neuron network unit, comprising: an inhibitory read control transistor and an excitatory read control transistor arranged side by side; a write control transistor stacked vertically opposite the above two transistors; the source of the write control transistor and the sources of the read control transistors face away from each other; the drain of the write control transistor is electrically connected to a first end of a magnetic tunnel junction; the drain of the inhibitory read control transistor is electrically connected to a second end of the magnetic tunnel junction through a weight signal line; and the drain of the excitatory read control transistor is electrically connected to the second end of the magnetic tunnel junction through the weight signal line.
  • by stacking the transistors of the neuron network unit vertically, the invention reduces the total chip area, shortens the physical distance between the write control transistor and the read control transistors, and reduces the parasitic effects, signal noise, and clock delays caused by long interconnects.
  • multiple transistors share one magnetic tunnel junction, so that when the excitatory signal and the inhibitory signal are opposite, their levels are exactly equal and cancel, which improves the accuracy of the deep learning algorithm.
  • Figure 1 shows a typical neural network architecture and a corresponding circuit diagram in the prior art.
  • FIG. 2 is a schematic diagram of the operation mode of the neuron network shown in FIG. 1 in the working process.
  • FIG. 3 is a schematic structural diagram of a neuron network unit according to a specific embodiment of the invention.
  • FIG. 3 is a schematic structural diagram of a neuron network unit according to this embodiment, including a write control transistor T1, an inhibitory read control transistor T21, and an excitatory read control transistor T22; the write control transistor T1 is on top with its source facing upward, while the inhibitory read control transistor T21 and the excitatory read control transistor T22 are on the bottom with their sources facing downward.
  • the write control transistor T1, the inhibitory read control transistor T21, and the excitatory read control transistor T22 each include a source S+/-, a drain D+/-, and a vertical channel between the source and the drain, with the gate G+/- arranged around the vertical channel.
  • one inhibitory read control transistor and one excitatory read control transistor are used as an example of the plurality of transistors arranged side by side.
  • multiple inhibitory read control transistors T21 and excitatory read control transistors T22 may also be provided, with software gating different transistors to realize feedback for multiple neurons.
  • the drain of the write control transistor is electrically connected to the first end of a storage medium unit.
  • the storage medium unit is a magnetic tunnel junction MTJ, that is, the drain of the write control transistor is electrically connected to the first end of a magnetic tunnel junction MTJ.
  • the storage medium unit may also be a phase-change memory cell, a magnetoresistive memory cell, or another medium structure that stores data through changes in its resistance state.
  • the drains of the inhibitory read control transistor T21 and the excitatory read control transistor T22 are electrically connected to the second terminal of the magnetic tunnel junction MTJ through the same weight line W.
  • the first end of the magnetic tunnel junction includes a first ferromagnetic sheet 01, and the second end includes a second ferromagnetic sheet 02.
  • the magnetic tunnel junction MTJ further includes an insulating interlayer 03 between the first ferromagnetic sheet 01 and the second ferromagnetic sheet 02 .
  • the thickness of the insulating interlayer 03 is about 0.1 nm to 2.0 nm.
  • the spin-orbit torque (SOT, Spin Orbit Torque) effect is used to flip the magnetic moment of the free layer in the MTJ, which is the above-mentioned SOT MRAM.
  • once turned on, the write control transistor T1 controls the state of the magnetic tunnel junction MTJ in the above neuron network: if T1 is turned on and a sufficiently large current is applied through it to the MTJ, the state of the MTJ can be rewritten, serving as logic configuration; if T1 is turned on and a read-level voltage is applied through it to the MTJ, the output current on the weight line W depends on the resistance of the MTJ, serving as the output.
  • the inhibitory read control transistor T21 or the excitatory read control transistor T22 is turned on by software to enter the working state.
  • the software records these two values separately in the background and performs operations to obtain the final result.
  • the two transistors share one magnetic tunnel junction, so that when the excitatory signal and the inhibitory signal are opposite, their levels are exactly equal and cancel, which improves the accuracy of the deep learning algorithm.
  • the above technical solution stacks the two transistors, thereby reducing the overall area of the circuit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Theoretical Computer Science (AREA)
  • Molecular Biology (AREA)
  • Neurology (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Hall/Mr Elements (AREA)
  • Mram Or Spin Memory Techniques (AREA)

Abstract

A neuronal network unit, comprising: an inhibitory read control transistor (T21) and an excitatory read control transistor (T22) arranged side by side, and a write control transistor (T1) stacked vertically opposite the two transistors (T21, T22); the source of the write control transistor (T1) and the sources of the read control transistors (T21, T22) face away from each other; the drain of the write control transistor (T1) is electrically connected to a first end of a magnetic tunnel junction (MTJ); the drain of the inhibitory read control transistor (T21) is electrically connected to a second end of the magnetic tunnel junction (MTJ) by means of a weight signal line (W); and the drain of the excitatory read control transistor (T22) is electrically connected to the second end of the magnetic tunnel junction (MTJ) by means of the weight signal line (W). By arranging the transistors of the neuronal network unit in a vertically stacked manner, the total chip area is reduced, and the two transistors share one magnetic tunnel junction (MTJ), so that when an excitation signal and an inhibition signal are opposite, their levels are exactly equal and cancel, improving the accuracy of a deep learning algorithm.

Description

Neuron network unit
Technical Field
The present invention relates to the field of artificial intelligence, and in particular to a neuron network unit.
Background Art
FIG. 1 is a schematic diagram of a typical neuron network architecture and the corresponding circuit, including N×M synaptic weight components. The corresponding circuit includes an N×M cell array, and each cell is a non-volatile memory (NVM) composed of two symmetrically arranged transistors and corresponding variable resistors. FIG. 2 shows the operation mode of this neuron network during operation; the weight at each point is W = G⁺ − G⁻.
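For illustration only (this sketch is not part of the original disclosure), the differential-conductance crossbar computation described above can be modeled numerically as follows; the array names g_plus, g_minus, and v_in and all numeric values are hypothetical placeholders:

```python
import numpy as np

# Hypothetical conductance matrices for the excitatory (G+) and inhibitory (G-)
# branches of an N x M crossbar; the values are illustrative only.
N, M = 4, 3
rng = np.random.default_rng(0)
g_plus = rng.uniform(0.0, 1.0, size=(N, M))   # excitatory conductances
g_minus = rng.uniform(0.0, 1.0, size=(N, M))  # inhibitory conductances

# Effective synaptic weight at each cross-point: W = G+ - G-
w = g_plus - g_minus

# Input voltages applied to the N rows at a given time t.
v_in = rng.uniform(0.0, 1.0, size=N)

# Each of the M weight lines sums the currents of its column
# (Ohm's law plus Kirchhoff's current law): I_j = sum_i W_ij * V_i.
i_out = v_in @ w
print(i_out)
```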
As can be seen from the above figures, a complete neuron network circuit chip requires a large number of transistors arranged in an array, so reducing the total area of the neuron network chip is an important problem in the prior art.
Summary of the Invention
The technical problem to be solved by the present invention is to provide a neuron network unit that can reduce the circuit area.
In order to solve the above problem, the present invention provides a neuron network unit, comprising: an inhibitory read control transistor and an excitatory read control transistor arranged side by side; a write control transistor stacked vertically opposite the above two transistors; the source of the write control transistor and the sources of the read control transistors face away from each other; the drain of the write control transistor is electrically connected to a first end of a magnetic tunnel junction; the drain of the inhibitory read control transistor is electrically connected to a second end of the magnetic tunnel junction through a weight signal line; and the drain of the excitatory read control transistor is electrically connected to the second end of the magnetic tunnel junction through the weight signal line.
By stacking the transistors of the neuron network unit on top of each other, the present invention reduces the total chip area and shortens the physical distance between the write control transistor and the read control transistors, reducing the parasitic effects, signal noise, and clock delays caused by long interconnects. Moreover, multiple transistors share one magnetic tunnel junction, so that when the excitatory signal and the inhibitory signal are opposite, their levels are exactly equal and cancel, which improves the accuracy of the deep learning algorithm.
Description of the Drawings
FIG. 1 is a schematic diagram of a typical neuron network architecture and the corresponding circuit in the prior art.
FIG. 2 is a schematic diagram of the operation mode of the neuron network shown in FIG. 1 during operation.
FIG. 3 is a schematic structural diagram of a neuron network unit according to an embodiment of the present invention.
Detailed Description
The embodiments of the neuron network unit provided by the present invention are described in detail below with reference to the accompanying drawings.
FIG. 3 is a schematic structural diagram of a neuron network unit according to this embodiment, including a write control transistor T1, an inhibitory read control transistor T21, and an excitatory read control transistor T22. The write control transistor T1 is on top with its source facing upward, while the inhibitory read control transistor T21 and the excitatory read control transistor T22 are on the bottom with their sources facing downward. The write control transistor T1, the inhibitory read control transistor T21, and the excitatory read control transistor T22 each include a source S+/-, a drain D+/-, and a vertical channel between the source and the drain, with the gate G+/- arranged around the vertical channel. In this embodiment, one inhibitory read control transistor and one excitatory read control transistor are used as an example of the plurality of transistors arranged side by side. In other embodiments, multiple inhibitory read control transistors T21 and excitatory read control transistors T22 may be provided, with software gating different transistors to realize feedback for multiple neurons.
The drain of the write control transistor is electrically connected to a first end of a storage medium unit. In this embodiment, the storage medium unit is a magnetic tunnel junction MTJ; that is, the drain of the write control transistor is electrically connected to a first end of a magnetic tunnel junction MTJ. In other embodiments, the storage medium unit may also be a phase-change memory cell, a magnetoresistive memory cell, or another medium structure that stores data through changes in its resistance state.
The drains of the inhibitory read control transistor T21 and the excitatory read control transistor T22 are electrically connected to a second end of the magnetic tunnel junction MTJ through the same weight line W. The first end of the magnetic tunnel junction includes a first ferromagnetic sheet 01, and the second end includes a second ferromagnetic sheet 02. The magnetic tunnel junction MTJ further includes an insulating interlayer 03 between the first ferromagnetic sheet 01 and the second ferromagnetic sheet 02. The thickness of the insulating interlayer 03 is about 0.1 nm to 2.0 nm.
The resistance characteristic of the above magnetic tunnel junction MTJ is that, under a specific external electrical stimulus, its resistance value changes because of the properties of its own material, and this change stores information. As the volume of the magnetic memory layer shrinks, the spin-polarized current that must be injected for a write or switching operation also becomes smaller. Therefore, this write method can achieve device miniaturization and current reduction at the same time. In the prior art, characteristics such as write current (~50 μA), power consumption, write speed (~10 ns), and cell area (~50 F²) are still some distance from the ideal figures. The basic reason is that, in the STT (Spin Transfer Torque) switching mechanism, the carriers of the spin current are electrons, whose mass is very small, only 1/1840 of that of a proton. As basic physics tells us, force and torque are proportional to mass, so even though a charge current can carry a spin current and exert a torque on the magnetic moment, the efficiency is not high. Because the switching efficiency is low, the write speed is slow, the current must be large, and the power consumption is high. And because a large current must be supplied, a large CMOS transistor, larger than the MTJ itself, is required, which becomes a bottleneck for scaling. In quantum mechanics, spin can interact with the angular momentum of atomic orbitals, but the source of this force or torque is the atomic nucleus; the larger the atomic number, the stronger the spin-orbit interaction. Some special materials, such as topological insulators, also exhibit unusually large spin-orbit interaction at their surfaces. Using the spin-orbit torque (SOT, Spin Orbit Torque) effect to flip the magnetic moment of the free layer in the MTJ is the SOT MRAM mentioned above.
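To make the role of the MTJ as a resistive storage element concrete, here is a highly simplified behavioral sketch, offered only as an assumption for illustration and not as the patent's circuit or any real device's parameters: the junction sits in either a low-resistance parallel state or a high-resistance antiparallel state, and a write current above a placeholder threshold flips the free layer.

```python
class ToyMTJ:
    """Very simplified two-state model of a magnetic tunnel junction.

    The resistance values and switching threshold are placeholders,
    not measured device parameters.
    """

    def __init__(self, r_parallel=5e3, r_antiparallel=10e3, i_switch=50e-6):
        self.r_parallel = r_parallel          # ohms, low-resistance state
        self.r_antiparallel = r_antiparallel  # ohms, high-resistance state
        self.i_switch = i_switch              # amps, critical write current
        self.parallel = True                  # start in the parallel state

    def write(self, current):
        """A sufficiently large write current flips the free layer.

        The sign convention (positive -> parallel, negative -> antiparallel)
        is arbitrary and chosen only for this sketch.
        """
        if abs(current) >= self.i_switch:
            self.parallel = current > 0

    def read_current(self, v_read):
        """Read with a small voltage; the output current encodes the state."""
        r = self.r_parallel if self.parallel else self.r_antiparallel
        return v_read / r


mtj = ToyMTJ()
mtj.write(-60e-6)             # flip to the antiparallel (high-resistance) state
print(mtj.read_current(0.1))  # smaller current than in the parallel state
```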
When the above neuron network operates, the write control transistor T1, once turned on, can be used to control the state of the magnetic tunnel junction MTJ in the network: if T1 is turned on and a sufficiently large current is applied through it to the MTJ, the state of the MTJ can be rewritten, which serves as logic configuration; if T1 is turned on and a read-level voltage is applied through it to the MTJ, the output current on the weight line W depends on the resistance of the MTJ, which serves as the output. Software then gates either the inhibitory read control transistor T21 or the excitatory read control transistor T22 into the working state. When the excitatory read control transistor T22 is in the working state, the state of the magnetic tunnel junction MTJ is configured as the excitatory logic G⁺, and the output current of the weight line W is the excitatory logic I = ∑G⁺V(t); conversely, when the inhibitory read control transistor T21 is in the working state, the states of all magnetic tunnel junctions MTJ in the array are configured as the inhibitory logic G⁻, and the output current of the weight line W is the inhibitory logic I = ∑G⁻V(t). The software records these two values separately in the background and combines them to obtain the final result. Moreover, the two transistors share one magnetic tunnel junction, so that when the excitatory signal and the inhibitory signal are opposite, their levels are exactly equal and cancel, which improves the accuracy of the deep learning algorithm.
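The readout flow just described can be sketched as follows, again as an illustrative assumption rather than the patent's implementation: software gates one read branch at a time, the weight line delivers I = ∑G⁺V(t) or I = ∑G⁻V(t), and the two recorded currents are subtracted to obtain the net result. The conductance and voltage values below are placeholders.

```python
import numpy as np

def weight_line_current(g, v):
    """Current on the weight line W when one read branch is gated on:
    I = sum_i G_i * V_i(t)."""
    return float(np.dot(g, v))

# Placeholder conductance states of the shared MTJs in the two read phases.
g_plus = np.array([3e-4, 1e-4, 2e-4])   # siemens, excitatory configuration G+
g_minus = np.array([1e-4, 1e-4, 3e-4])  # siemens, inhibitory configuration G-
v_t = np.array([0.3, -0.1, 0.2])        # row voltages at time t

i_exc = weight_line_current(g_plus, v_t)   # T22 gated on: I = sum(G+ * V)
i_inh = weight_line_current(g_minus, v_t)  # T21 gated on: I = sum(G- * V)

# Software records both currents and subtracts them to obtain the net
# weighted sum, mirroring W = G+ - G- at each cross-point. Because both
# read paths share the same magnetic tunnel junction, equal and opposite
# excitatory/inhibitory contributions cancel exactly.
net = i_exc - i_inh
print(net)
```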
The above technical solution stacks the two transistors, thereby reducing the overall area of the circuit.
The above are only preferred embodiments of the present invention. It should be noted that those of ordinary skill in the art can make several improvements and modifications without departing from the principles of the present invention, and these improvements and modifications should also be regarded as falling within the protection scope of the present invention.

Claims (7)

  1. A neuron network unit, characterized in that it comprises:
    a plurality of read control transistors arranged side by side;
    a write control transistor stacked vertically opposite the above plurality of transistors;
    wherein the source of the write control transistor and the sources of the read control transistors face away from each other;
    the drain of the write control transistor is electrically connected to a first end of a storage medium unit;
    the drains of the read control transistors are electrically connected to a second end of the storage medium unit through a weight signal line; and
    the drain of the excitatory read control transistor is electrically connected to the second end of the magnetic tunnel junction through the weight signal line.
  2. The neuron network unit according to claim 1, characterized in that the plurality of transistors arranged side by side include an inhibitory read control transistor and an excitatory read control transistor.
  3. The neuron network unit according to claim 1, characterized in that the storage medium unit is selected from any one of a magnetic tunnel junction, a phase-change memory cell, and a magnetoresistive memory cell.
  4. The neuron network unit according to claim 3, characterized in that the first end of the magnetic tunnel junction includes a first ferromagnetic sheet, and the second end includes a second ferromagnetic sheet.
  5. The neuron network unit according to claim 4, characterized in that the magnetic tunnel junction further includes an insulating interlayer between the first ferromagnetic sheet and the second ferromagnetic sheet.
  6. The neuron network unit according to claim 5, characterized in that the thickness of the insulating interlayer is about 0.1 nm to 2.0 nm.
  7. The neuron network unit according to claim 1, characterized in that the write control transistor and the plurality of read control transistors each include a source, a drain, and a vertical channel between the source and the drain, with a gate arranged around the vertical channel.
PCT/CN2020/135749 2020-11-30 2020-12-11 Neuronal network unit WO2022110327A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011370366.X 2020-11-30
CN202011370366.XA CN112465128B (en) 2020-11-30 2020-11-30 Neuronal network element

Publications (1)

Publication Number Publication Date
WO2022110327A1 true WO2022110327A1 (en) 2022-06-02

Family

ID=74804852

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/135749 WO2022110327A1 (en) 2020-11-30 2020-12-11 Neuronal network unit

Country Status (2)

Country Link
CN (1) CN112465128B (en)
WO (1) WO2022110327A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102439723A (en) * 2009-04-16 2012-05-02 希捷科技有限公司 Three dimensionally stacked non-volatile memory units
CN105976022A (en) * 2016-04-27 2016-09-28 清华大学 Circuit structure, artificial neural network and method of simulating synapse using circuit structure
US20200083286A1 (en) * 2018-09-11 2020-03-12 Intel Corporation Stacked transistor bit-cell for magnetic random access memory
CN111656371A (en) * 2018-01-23 2020-09-11 美商安纳富来希股份有限公司 Neural network circuit with non-volatile synapse array
CN111670444A (en) * 2018-01-24 2020-09-15 国际商业机器公司 Synaptic memory
US20200312906A1 (en) * 2019-03-27 2020-10-01 International Business Machines Corporation Stackable symmetrical operation memory bit cell structure with bidirectional selectors

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106558333B (en) * 2015-09-29 2018-11-09 中国科学院物理研究所 Spin transfer torque MAGNETIC RANDOM ACCESS MEMORY including annular magnet tunnel knot
CN110890115A (en) * 2018-09-07 2020-03-17 上海磁宇信息科技有限公司 Spin orbit torque magnetic memory

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102439723A (en) * 2009-04-16 2012-05-02 希捷科技有限公司 Three dimensionally stacked non-volatile memory units
CN105976022A (en) * 2016-04-27 2016-09-28 清华大学 Circuit structure, artificial neural network and method of simulating synapse using circuit structure
CN111656371A (en) * 2018-01-23 2020-09-11 美商安纳富来希股份有限公司 Neural network circuit with non-volatile synapse array
CN111670444A (en) * 2018-01-24 2020-09-15 国际商业机器公司 Synaptic memory
US20200083286A1 (en) * 2018-09-11 2020-03-12 Intel Corporation Stacked transistor bit-cell for magnetic random access memory
US20200312906A1 (en) * 2019-03-27 2020-10-01 International Business Machines Corporation Stackable symmetrical operation memory bit cell structure with bidirectional selectors

Also Published As

Publication number Publication date
CN112465128B (en) 2024-05-24
CN112465128A (en) 2021-03-09

Similar Documents

Publication Publication Date Title
US7313015B2 (en) Storage element and memory including a storage layer a magnetization fixed layer and a drive layer
Zhang et al. Stateful reconfigurable logic via a single-voltage-gated spin hall-effect driven magnetic tunnel junction in a spintronic memory
US9218863B2 (en) STT-MRAM cell structure incorporating piezoelectric stress material
JP5288529B2 (en) Magnetic memory element
TW201735026A (en) Magnetic memory
US7200036B2 (en) Memory including a transfer gate and a storage element
Li et al. An overview of non-volatile memory technology and the implication for tools and architectures
US8331136B2 (en) Recording method of nonvolatile memory and nonvolatile memory
WO2019189895A1 (en) Neural network circuit device
US20090034326A1 (en) Methods and apparatus for thermally assisted programming of a magnetic memory device
EP1918937B1 (en) Storage element and memory
Wang et al. Investigating ferroelectric minor loop dynamics and history effect—Part I: Device characterization
CN103971726A (en) Voltage assisted spin transfer torque magnetic random access memory writing scheme
US20240005974A1 (en) Self-reference storage structure and storage and calculation integrated circuit
US20220044103A1 (en) Matrix-vector multiplication using sot-based non-volatile memory cells
JP2006501587A (en) Magnetoresistive memory cell array and MRAM memory including this array
Wang et al. Multi-level neuromorphic devices built on emerging ferroic materials: A review
Thirumala et al. Reconfigurable Ferroelectric transistor–Part II: Application in low-power nonvolatile memories
CN113744779A (en) Magnetoresistive memory unit, write control method and memory module
WO2022110327A1 (en) Neuronal network unit
Lee et al. Review of candidate devices for neuromorphic applications
Kim et al. A comprehensive review of advanced trends: from artificial synapses to neuromorphic systems with consideration of non-ideal effects
WO2022110326A1 (en) Magnetic tunnel junction memory
Raman A Review on Non-Volatile and Volatile Emerging Memory Technologies
Lim et al. Multibit MRAM using a pair of memory cells

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20963215

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 241023)