CN107924485B - Electronic neural network circuit with resistance-based learning rule circuit - Google Patents


Info

Publication number
CN107924485B
Authority
CN
China
Prior art keywords
circuit
learning rule
neuron
learning
neural network
Prior art date
Legal status
Active
Application number
CN201680048968.9A
Other languages
Chinese (zh)
Other versions
CN107924485A
Inventor
C. Augustine
S. Paul
Current Assignee
Intel Corp
Original Assignee
Intel Corp
Priority date
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Publication of CN107924485A publication Critical patent/CN107924485A/en
Application granted granted Critical
Publication of CN107924485B publication Critical patent/CN107924485B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N3/08 Learning methods
    • G06N3/088 Non-supervised learning, e.g. competitive learning

Abstract

An apparatus is described. The apparatus includes a semiconductor chip. The semiconductor chip includes spiking neural network circuitry. The spiking neural network circuitry includes a learning rule circuit. The learning rule circuit includes a resistive element. The resistance of the resistive element is used to determine a change in the weight of a synapse between neurons of the spiking neural network circuitry.

Description

Electronic neural network circuit with resistance-based learning rule circuit
Technical Field
The field of the invention relates generally to electronics, and more particularly to an electronic neural network circuit having a resistance-based learning rule circuit.
Background
In the field of computational science, artificial neural networks can be used to implement various forms of cognitive science, such as machine learning and artificial intelligence. Essentially, an artificial neural network is an adaptive information processing network whose design is modeled on the human brain: a plurality of neurons interconnected by synapses.
Drawings
A better understanding of the present invention can be obtained from the following detailed description in conjunction with the following drawings, in which:
FIG. 1 illustrates a neural network;
FIGS. 2a and 2b illustrate an inhibitory learning rule and an excitatory learning rule;
FIG. 3 illustrates the resistance of a magnetic tunnel junction device as a function of applied voltage or current;
FIG. 4 shows a circuit for implementing an electronic neural network;
FIG. 5 illustrates a first embodiment of a learning rule circuit;
FIG. 6 shows a second embodiment of a learning rule circuit;
FIG. 7a shows different resistance/learning rule curves as a function of applied voltage or current;
FIG. 7b shows circuitry for implementing the different resistance/learning rule curves of FIG. 7 a;
FIG. 8 illustrates an execution path and a learning path flowing through the same magnetic tunnel junction device;
fig. 9 illustrates a computing system.
Detailed Description
Fig. 1 shows a simplified depiction of a neural network 100. As observed in fig. 1, the network comprises a plurality of neurons 101 interconnected by a plurality of synapses 102. In operation, the neurons 101 exchange messages with one another through the synapses 102. Each of the synapses 102 has its own particular numerical weight that may be adjusted based on experience. Thus, the neural network 100 is adaptive and capable of learning.
One type of neural network, known as a "spiking" neural network, has synaptic messages in the form of pulses. Here, if the state of a neuron reaches a certain value, the neuron "fires," transmitting a pulse/message to the neurons connected to it. Simply put, the state value of a neuron changes as it receives pulses/messages from other neurons. If the received pulse activity reaches a certain intensity, the state of the receiving neuron may change to a level that causes that neuron to fire.
The weight of a synapse affects the amplitude of the messages it carries. Spike-timing-dependent plasticity (STDP) is a learning function used to change the weight of a synapse in a spiking neural network in response to the difference in firing times at either end of the synapse. There are generally two types of STDP learning functions: an inhibitory learning function and an excitatory learning function. The inhibitory learning function is used for synapses whose messages tend to reduce the firing activity of the message-receiving neuron. In contrast, the excitatory learning function is used for synapses whose messages tend to promote the firing activity of the message-receiving neuron.
By application of a learning function, the weight of a synapse changes in view of observed pre-neuron and post-neuron firing, which in turn corresponds to the learning activity of the network. Fig. 2a shows an STDP inhibitory learning function and fig. 2b shows an STDP excitatory learning function. For both functions, Δt corresponds to the difference in firing time between the neurons on either side of the synapse, while Δz corresponds to the change in synaptic weight.
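The two rule shapes can be sketched numerically. The patent gives only the qualitative curves of Figs. 2a and 2b, so the exponential form and the constants `A` and `TAU` below are illustrative assumptions, not taken from the disclosure:

```python
import math

A, TAU = 1.0, 20.0  # assumed amplitude and time constant (e.g., ms)

def inhibitory_rule(dt):
    # Fig. 2a: symmetric in dt -- the same weight change for either
    # firing order, with magnitude decaying as |dt| grows.
    return A * math.exp(-abs(dt) / TAU)

def excitatory_rule(dt):
    # Fig. 2b: identical to the inhibitory rule for dt < 0, but with
    # reversed polarity for dt > 0 (post-neuron fires after pre-neuron).
    mag = A * math.exp(-abs(dt) / TAU)
    return mag if dt < 0 else -mag
```

With these shapes, the excitatory rule equals the inhibitory rule to the left of the origin and is its mirror image in polarity to the right, which is exactly the relationship exploited by the circuits described below.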
A problem with the implementation and construction of practical spiking neural networks is the sheer number of synapses. Here, note from fig. 1 that since a single neuron may be connected to many other neurons, the number of synapses can greatly exceed the number of neurons. Given that a critical mass of neurons is typically required for a given level of intelligence, the number of synapses required to implement a practical spiking neural network can be extremely large (e.g., one hundred or one thousand times the number of neurons).
Thus, the fabrication of semiconductor chips whose constituent circuitry implements a spiking neural network faces a challenge: the synapses should be implemented with a reduced number of active devices in order to limit overall chip size and manufacturing complexity.
One solution to the problem described in the Background is to construct the synapse circuit with magnetic tunnel junction (MTJ) devices. An MTJ device exhibits either a high or a low resistance depending on the relative orientation of the two magnetic moments within the device. Here, according to one type of MTJ implementation, the device has a low resistance (RL) when the magnetic moment of a first magnetic layer (e.g., a fixed layer) of the device points in the same direction as that of a second magnetic layer (e.g., a free layer). In contrast, referring to fig. 3, the MTJ device has a high resistance (RH) when the magnetic moment of the first magnetic layer points in the direction opposite to that of the second magnetic layer. In various embodiments, the magnetic moment of the fixed layer does not change direction, while the magnetic moment of the free layer can.
As observed in fig. 3, the high-resistance state of the MTJ device exhibits a change in resistance as a function of applied voltage or current that is very similar in shape to the inhibitory STDP learning function of fig. 2a. Thus, if the voltage or current applied to the high-resistance-state MTJ device represents the firing time difference between neurons in the neural network, the resistance of the MTJ device may be used to establish the change in weight for the synapse between those neurons. That is, the MTJ device may be used to implement an inhibitory learning rule.
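As a rough illustration of this mapping, the high-resistance branch of Fig. 3 can be modeled as a resistance that peaks at zero bias and falls off symmetrically with |V|. The functional form and all parameter values (`RH`, `RL`, `V0`) below are assumptions for illustration, not measurements from the patent:

```python
import math

RH, RL = 2000.0, 1000.0  # assumed ohms: zero-bias high state, low-state floor
V0 = 0.3                 # assumed volts: roll-off scale of the curve

def mtj_high_state_resistance(v):
    """Toy model of the high-resistance MTJ branch of Fig. 3: resistance
    is maximal at zero bias and decays symmetrically toward RL as the
    magnitude of the applied voltage grows."""
    return RL + (RH - RL) * math.exp(-abs(v) / V0)
```

Because this curve shares the symmetric, peaked shape of the inhibitory STDP rule, a voltage encoding |Δt| maps directly to a weight-change magnitude.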
Fig. 4 shows a generic neural network circuit 400 with first and second circuits 401, 402 implementing first and second neurons. In various embodiments, a neuron circuit comprises circuitry for maintaining a certain state (e.g., a register, flip-flop, capacitor, etc.) and circuitry for transmitting a message if the state reaches a certain level.
The timing measurement circuit 403 measures the difference between the firing times of the two neuron circuits 401, 402 and generates an output signal (e.g., a digital signal, a voltage, or a current) representing the firing time difference in both amplitude and polarity. Specifically, if the post-neuron 402 fires after the pre-neuron 401 (where the pulse/message propagates along the execution path from the pre-neuron 401 to the post-neuron 402), Δt is positive and the timing circuit 403 generates a signal of a first polarity (e.g., positive) whose amplitude represents the time difference.
The signal is then applied to the learning rule circuit 404, which contains the MTJ device 405 in a high-resistance state. The input signal from the timing measurement circuit 403 is processed by the learning rule circuit 404 such that a representative signal is applied to the MTJ device 405 and the resistance of the device is measured.
For example, if a voltage representing Δt is applied across the terminals of the MTJ device, the resultant current flowing through the MTJ device is measured to determine the resistance of the MTJ device. Likewise, if a current representing Δt is driven through the MTJ device, the resultant voltage across the MTJ device is measured to determine the resistance of the device. The measured resistance is then used to generate the input signal to the weight circuit 406.
Here, recall that the measured resistance of the MTJ device represents the weight change of the synapse between the two neurons 401, 402. In response to the input signal received from the learning rule circuit 404, the weight circuit 406 calculates a new weight value for the synapse. A message may then travel along the execution path from the pre-neuron through the weight circuit 406 to the post-neuron, with the new weight applied to the message. Thus, each time the timing measurement circuit sends a new signal along the learning path, a new weight may be applied to the synapse.
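One pass through the learning path of Fig. 4 can be summarized as a single update step. The linear mapping from measured resistance to weight change (the `scale` factor below) is an assumption for illustration; the patent does not specify how the weight circuit converts the learning rule output into a new weight:

```python
def synapse_weight_update(dt, weight, r_of_dt, scale=1e-4):
    """Sketch of one learning-path pass in Fig. 4.

    dt      -- firing-time difference from the timing circuit 403
    weight  -- current synaptic weight held by the weight circuit 406
    r_of_dt -- callable standing in for the learning rule circuit 404:
               maps dt to a signed resistance reading (ohms)
    scale   -- assumed conversion from resistance to weight change
    """
    delta_w = scale * r_of_dt(dt)  # measured resistance -> weight change
    return weight + delta_w
```

For example, a constant-resistance stand-in rule would nudge a weight of 0.5 by `scale` times that resistance on each firing event.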
In various embodiments, the learning rule circuit 404 may implement an inhibitory or excitatory rule according to an input control signal provided as a value from a register (not shown). Here, the value in the register may be loaded as part of the configuration of the neural network circuit.
Fig. 5 illustrates an embodiment of a learning rule circuit 504. Here, referring briefly back to figs. 2a and 2b, note that the excitatory function of fig. 2b can be viewed as the inhibitory function of fig. 2a with a slight modification that reverses the polarity for positive time differences. That is, to the left of the origin along the horizontal axis, the rules of figs. 2a and 2b are the same. To the right of the origin, the rule of fig. 2b has the same magnitude as that of fig. 2a but opposite polarity.
Referring to FIG. 5, circuitry 501 implements the inhibitory channel of the learning rule circuit 504, while circuitry 502 implements the excitatory channel of the learning rule circuit 504. That is, if the learning rule circuit 504 is used to implement an inhibitory learning rule, multiplexer 509 selects channel 501. In contrast, if the learning rule circuit 504 is used to implement an excitatory learning rule, multiplexer 509 selects channel 502. Recall that, for example, a value in a register may establish the appropriate learning rule and provide an input signal to the learning rule circuit indicating which learning rule is to be applied.
According to the learning rule circuit of fig. 5, the resistance of the MTJ device 503 is measured by driving a current through the device 503, measuring the voltage across the device 503 using the A/D converter 507, and dividing the measured voltage by the drive current using logic circuit 508 (other embodiments may instead apply a voltage to the device and measure the current through it). Note that the MTJ curves of fig. 3 are symmetric about the vertical axis. That is, the MTJ device exhibits a particular resistance regardless of the polarity of the applied voltage or current. As such, current source circuitry 505 drives current in only one direction regardless of the polarity of the Δt measurement.
Thus, current source circuitry 505 accepts an input indicating the magnitude of the time difference between the pre- and post-neurons (e.g., as provided by timing measurement circuit 403 of fig. 4) and drives a current through MTJ device 503 that is proportional to the Δt magnitude. The Δt magnitude input signal turns on a number of drive-current transistors (for larger time differences, more drive-current transistors are turned on to drive more current; for smaller time differences, fewer are turned on to drive less current). An analog-to-digital converter 506 receiving the Δt magnitude input signal converts it into a plurality of bits (e.g., in the form of a thermometer code) to turn on the appropriate number of drive-current transistors.
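A thermometer code of this kind can be sketched as follows; the bit width and full-scale value are illustrative choices, not taken from the patent:

```python
def thermometer_code(magnitude, full_scale, n_transistors=8):
    """Convert a |dt| magnitude into a thermometer code: the first k bits
    are 1, turning on k drive-current transistors. A larger magnitude
    yields more 1s and hence more drive current."""
    k = min(n_transistors, int(n_transistors * magnitude / full_scale))
    return [1] * k + [0] * (n_transistors - k)
```

For instance, a quarter-scale time difference switches on two of eight assumed current legs, while a full-scale difference switches on all eight.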
The polarity of the time difference measurement has no effect on the inhibitory learning rule output. Therefore, the inhibitory learning channel 501 is simply a direct reading of the MTJ device resistance. In an embodiment, the MTJ device resistance as provided by logic circuit 508 is taken to have positive polarity.
In contrast, the polarity of the time difference measurement does affect the excitatory learning rule output. Specifically, for a negative Δt measurement, the output resistance is positive and is therefore exactly the output of logic circuit 508. For a positive Δt measurement, the learning rule output value is negative, so the correct output is the resistance value from the logic circuit but with negative polarity. As such, the excitatory channel includes two sub-channels: one providing a positive resistance and the other a negative resistance. Multiplexer 510 selects the positive sub-channel for a negative Δt measurement and the negative sub-channel for a positive Δt measurement.
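The selection logic of multiplexers 509 and 510 reduces to a small sign-selection function. The sketch below mirrors the channel behavior described above; the function and parameter names are ours, not the patent's:

```python
def learning_rule_output(r_measured, dt, excitatory):
    """Mux logic of Fig. 5: the inhibitory channel passes the measured
    resistance with positive polarity regardless of dt; the excitatory
    channel passes the positive sub-channel for dt < 0 and the negated
    sub-channel for dt > 0."""
    if not excitatory:
        return r_measured                      # channel 501 via mux 509
    return r_measured if dt < 0 else -r_measured  # mux 510 picks sub-channel
```

Note that only the sign selection differs between the two rules; the resistance measurement path is shared.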
Fig. 5 provides an embodiment of a mixed-signal learning rule circuit that includes both analog and digital signal processing. In contrast, fig. 6 shows an embodiment of an analog learning rule circuit 604. Here, an analog input signal Δt is received indicating both the magnitude and polarity of the time difference measurement. The Δt time difference input signal is provided to a rectifying amplifier 601 that outputs the absolute value of the Δt magnitude multiplied by a constant A (A may be unity).
As such, as the magnitude of the time difference (Δt) increases, the voltage applied to the MTJ device 603 increases. Ammeter circuit 602 measures the current through the MTJ device 603, while voltmeter circuit 605 measures the voltage across the MTJ device 603. A division circuit 606 receives the outputs of the ammeter 602 and the voltmeter 605 and determines the resistance of the MTJ device (R = V/I).
A pair of switch circuits 608, 609 provide the output of the learning rule circuit according to: 1) a control signal (e.g., from a configuration register) indicating whether the inhibitory or the excitatory learning rule is to be implemented; and 2) the polarity of the Δt time difference signal. In the embodiment of fig. 6, if the control signal indicates that the inhibitory learning rule is to be applied, the NFET device of switch 609 is "on" and the PFET device of switch 609 is "off". In this state, the positive-polarity output of division circuit 606 feeds directly into the learning circuit output regardless of the polarity of the Δt measurement signal.
In contrast, if the control signal indicates that the excitatory learning rule is to be applied, the NFET device of switch 609 is "off" and the PFET device of switch 609 is "on". In this state, the output of the learning circuit is either the positive-polarity output of division circuit 606 (if Δt is positive, the NFET of switch 608 is "on" and the PFET of switch 608 is "off") or the negative-polarity output of division circuit 606 as produced by the unity-gain inverting amplifier 607 (if Δt is negative, the NFET of switch 608 is "off" and the PFET of switch 608 is "on"). The pass-gate structure 609 has been shown for ease of illustration. To avoid voltage-drop problems across the pass gate, the pass-gate structure 609 may be replaced with a transmission-gate structure.
An improvement to the circuit embodiments of figs. 5 and 6 is the ability to change the slope and amplitude of the learning rule in a programmable manner. Here, fig. 7a shows inhibitory rule curves characterized by different linear slopes and vertical-axis intercepts of the resistance through the MTJ resistor node. Various embodiments may desire learning rules of different heights/slopes, and thus the ability to change the learning rule curve (in terms of its height and slope) may be desirable. Fig. 7b depicts a circuit implementation of this improvement that can be applied to either of the circuits of fig. 5 or fig. 6. Here, by placing programmable resistances in parallel with the MTJ device, the effective resistance through the MTJ device can be adjusted to achieve learning rules of various slopes and heights.
In an embodiment, each of the parallel resistances has two programmable states: open circuit or resistance R. In the open-circuit state, a parallel resistance has no effect on the circuit. When activated in the resistance-R state, however, a parallel resistance reduces the effective resistance of the MTJ device, which in turn reduces the slope of the learning rule and its vertical-axis intercept. Each parallel resistance is individually set to the open/R state, allowing a wide range of learning rule slopes by activating/deactivating different combinations of parallel resistances. The more parallel resistances are activated, the more the slope and height of the circuit's learning rule decrease. Other embodiments may implement a programmable resistance range (e.g., each resistance may be set to any of an open circuit, a maximum resistance R, and a plurality of resistance values between open circuit and R). Similar changes in the shape of the resistance/learning-rule curve may be achieved by placing a programmable resistance value in series with the MTJ. In yet other implementations, the programmable resistances may themselves be replaced with MTJs in order to implement learning functions of different heights and slopes.
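The effect of the parallel legs follows from ordinary parallel-conductance arithmetic. The sketch below assumes every activated leg contributes the same resistance `r_branch`; the per-leg value is an illustrative assumption:

```python
def effective_resistance(r_mtj, branch_states, r_branch=5000.0):
    """Effective resistance seen through the MTJ node of Fig. 7b.
    Each programmable leg is either open (False) or resistance R (True).
    Activating more legs raises total conductance, lowering the
    effective resistance and so flattening the learning rule's slope
    and height."""
    conductance = 1.0 / r_mtj + sum(1.0 / r_branch for on in branch_states if on)
    return 1.0 / conductance
```

With an assumed 2 kΩ MTJ and one 5 kΩ leg active, the effective resistance drops to roughly 1.43 kΩ; each additional leg lowers it further, stepping the curve of fig. 7a downward.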
Referring again to fig. 4, note that the basic neural network model includes a learning rule circuit 404 that feeds weight-change values to a weight circuit 406. The learning rule circuit output that determines the weight change is established by way of the learning path. The actual use of the weight, with its weight change, occurs when the pre-neuron transmits a message to the post-neuron along the execution path. As observed in fig. 4, the learning path and learning rule circuitry are substantially isolated from the execution path and weight circuitry.
Fig. 8 illustrates another improvement that may be achieved with an MTJ-based learning rule implementation. According to the design of fig. 8, because the weight change is established as the resistance of the MTJ device 803, the execution path can flow directly through the MTJ 803 if the weight W itself is also established as a resistance. In this case, both the learning path and the execution path flow through the MTJ 803. Here, as observed in fig. 8, the weight circuit 806 sets the base weight with a resistance of value W. The execution path then flows through the resistance W and the MTJ 803, so that both the weight and the weight change affect the message.
Although the above embodiments have been described with reference to MTJs, in various other embodiments another type of resistive element may be used in place of the MTJ. Here, any resistive device whose resistance varies with applied voltage with an appropriate slope and height may be used for STDP learning.
The neural network circuitry discussed herein may be embodied in various semiconductor circuits, at least some of which may be integrated with a computing system, such as an intelligent machine learning peripheral (e.g., a voice or image recognition peripheral).
Fig. 9 shows a depiction of an exemplary computing system 900, such as a personal computing system (e.g., a desktop or laptop computer) or a mobile or handheld computing system (e.g., a tablet device or smartphone). As observed in fig. 9, a basic computing system may include a central processing unit 901 (which may include, for example, a plurality of general-purpose processing cores and a main memory controller disposed on an application processor or multi-core processor), a system memory 902, a display 903 (e.g., a touchscreen or flat panel), a local wired point-to-point link (e.g., USB) interface 904, various network I/O functions 905 (such as an Ethernet interface and/or a cellular modem subsystem), a wireless local area network (e.g., WiFi) interface 906, a wireless point-to-point link (e.g., Bluetooth) interface 907, a global positioning system interface 908, various sensors 909_1 to 909_N (e.g., one or more of a gyroscope, an accelerometer, a magnetometer, a temperature sensor, a pressure sensor, a humidity sensor, etc.), a camera 910, a battery 911, a power management control unit 912, a speaker and microphone 913, and an audio encoder/decoder 914. Any of the sensors 909_1 to 909_N, as well as the camera 910, may include a neural network sensor chip with MTJ learning rule circuitry as described above.
An application processor or multi-core processor 950 may include within its CPU 901 one or more general-purpose processing cores 915, one or more graphics processing units 916, memory management functions 917 (e.g., a memory controller), and I/O control functions 918. The general-purpose processing cores 915 typically execute the operating system and application software of the computing system. The graphics processing unit 916 typically performs graphics-intensive functions to, for example, generate graphics information for presentation on the display 903. The memory control function 917 interfaces with the system memory 902. The power management control unit 912 generally controls the power consumption of system 900.
Each of the touchscreen display 903, the communication interfaces 904-907, the GPS interface 908, the sensors 909, the camera 910, and the speaker/microphone codecs 913, 914 can all be viewed as various forms of I/O (input and/or output) with respect to the overall computing system, which also includes integrated peripherals (e.g., the camera 910) where appropriate. Depending on the implementation, various of these I/O components may be integrated onto the application processor/multi-core processor 950, or may be located remotely from the die or outside the package of the application processor/multi-core processor 950.
Embodiments of the invention may include various processes as described above. The processes may be implemented as machine-executable instructions. The instructions may be used to cause a general-purpose or special-purpose processor to perform certain processes. Alternatively, the processes may be performed by specific hardware components that contain hardwired logic for performing the processes, or by any combination of programmed computer components and custom hardware components.
Elements of the present invention may also be provided as a machine-readable medium for storing the machine-executable instructions. The machine-readable medium may include, but is not limited to, floppy diskettes, optical disks, CD-ROMs, and magneto-optical disks, flash memory, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, propagation media or other type of media/machine-readable medium suitable for storing electronic instructions. For example, the invention may be downloaded as a computer program which may be transferred from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a carrier wave or other propagation medium via a communication link (e.g., a modem or network connection).
In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader spirit and scope of the invention as set forth in the appended claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (20)

1. An apparatus for a neural network, comprising:
a semiconductor chip including spiking neural network circuitry, the spiking neural network circuitry comprising:
a first neuron circuit;
a second neuron circuit;
a synapse coupled between the first neuron circuit and the second neuron circuit; and
a learning rule circuit to receive an input signal indicative of an amount of time between respective emissions of the first neuron circuit and the second neuron circuit and an input control signal indicative of a learning rule to be applied;
the learning rule circuit further includes a resistive element that exhibits a change in resistance as a function of applied voltage and/or current,
the learning rule circuit is to:
driving a signal through the resistive element proportional to a magnitude of the amount of time between respective emissions of the first neuron circuit and the second neuron circuit; and
generating an output signal indicative of a change in a weight of the synapse based on a resistance of the resistive element and the input control signal.
2. The apparatus of claim 1, wherein the learning rule circuit is to implement at least one of:
an inhibitory rule;
an excitatory rule.
3. The apparatus of claim 2, wherein the learning rule circuit is a mixed signal circuit.
4. The apparatus of claim 2, wherein the learning rule circuit is an analog circuit.
5. The apparatus of claim 1, further comprising one or more programmable resistors coupled to the resistive element.
6. The apparatus of claim 1, wherein the resistive element is a magnetic tunneling junction device.
7. The apparatus of claim 1, wherein a learning path and an execution path flow through the resistive element.
8. A computing system for a neural network, comprising:
one or more processors;
a memory controller coupled to a system memory;
a sensor comprising a semiconductor chip including spiking neural network circuitry, the spiking neural network circuitry comprising:
a first neuron circuit;
a second neuron circuit;
a synapse coupled between the first neuron circuit and the second neuron circuit; and
a learning rule circuit to receive an input signal indicative of an amount of time between respective emissions of the first neuron circuit and the second neuron circuit and an input control signal indicative of a learning rule to be applied;
the learning rule circuit further includes a resistive element that exhibits a change in resistance as a function of applied voltage and/or current,
the learning rule circuit is to:
driving a signal through the resistive element proportional to a magnitude of the amount of time between respective emissions of the first neuron circuit and the second neuron circuit; and
generating an output signal indicative of a change in a weight of the synapse based on a resistance of the resistive element and the input control signal.
9. The computing system of claim 8, wherein the learning rule circuit is to implement at least one of:
an inhibitory rule;
an excitatory rule.
10. The computing system of claim 9 wherein the learning rule circuit is a mixed signal circuit.
11. The computing system of claim 9 wherein the learning rule circuit is an analog circuit.
12. The computing system of claim 8, further comprising one or more programmable resistances coupled to the resistive element.
13. The computing system of claim 8 wherein the resistive element is a magnetic tunnel junction device.
14. The computing system of claim 8, wherein a learning path and an execution path flow through the resistive element.
15. An apparatus for a neural network, comprising:
a semiconductor chip including spiking neural network circuitry, the spiking neural network circuitry comprising:
a first neuron circuit;
a second neuron circuit;
a synapse coupled between the first neuron circuit and the second neuron circuit; and
a learning rule circuit to receive an input signal indicative of an amount of time between respective emissions of the first neuron circuit and the second neuron circuit and an input control signal indicative of a learning rule to be applied;
the learning rule circuit further includes a magnetic tunnel junction device that exhibits a change in resistance as a function of applied voltage and/or current,
the learning rule circuit is to:
driving a signal through the magnetic tunnel junction device proportional to a magnitude of the amount of time between respective emissions of the first neuron circuit and the second neuron circuit; and
generating an output signal indicative of a change in a weight of the synapse based on a resistance of the magnetic tunneling junction device and the input control signal.
16. The apparatus of claim 15, wherein the learning rule circuit is to implement at least one of:
an inhibitory rule;
an excitatory rule.
17. The apparatus of claim 16, wherein the learning rule circuit comprises another input to receive an indication of whether the inhibitory rule or the excitatory rule is to be implemented.
18. The apparatus of claim 15, wherein the learning rule circuit is a mixed signal circuit.
19. The apparatus of claim 15, wherein the learning rule circuit is an analog circuit.
20. The apparatus of claim 15, further comprising one or more programmable resistors coupled to the resistive element.
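Claim 15 describes, in effect, a spike-timing-dependent plasticity (STDP) rule: a signal proportional to the inter-firing interval is driven through the magnetic tunnel junction, and the resulting resistance, together with a control signal selecting the rule, determines the synaptic weight change. The Python sketch below is only an illustrative software model of that behavior, not the patented circuit; the function name, parameter values, and the exponential decay standing in for the device's resistance change are all assumptions.

```python
import math

def stdp_weight_change(dt_ms, rule, a_plus=0.10, a_minus=0.12, tau_ms=20.0):
    """Illustrative spike-timing-dependent weight update.

    dt_ms: firing time of the second (post-synaptic) neuron minus the
           firing time of the first (pre-synaptic) neuron, in ms.
    rule:  'excitatory' or 'inhibitory' -- plays the role of the input
           control signal that selects which learning rule is applied.
    Returns the signed change to apply to the synapse weight.
    """
    # The drive through the device is proportional to the magnitude of the
    # inter-firing interval; an exponential decay stands in here for the
    # resulting resistance change.
    magnitude = math.exp(-abs(dt_ms) / tau_ms)

    # Causal pairs (pre fires before post) potentiate; anti-causal depress.
    dw = a_plus * magnitude if dt_ms > 0 else -a_minus * magnitude

    # The control signal inverts the sign under the inhibitory rule.
    return dw if rule == "excitatory" else -dw
```

Under these assumptions, a small positive interval under the excitatory rule yields a positive weight change (potentiation), the same interval under the inhibitory rule yields a negative one, and the magnitude of the change shrinks as the interval between firings grows.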
CN201680048968.9A 2015-09-23 2016-07-18 Electronic neural network circuit with resistance-based learning rule circuit Active CN107924485B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US14/863,138 US20170083813A1 (en) 2015-09-23 2015-09-23 Electronic neural network circuit having a resistance based learning rule circuit
US14/863,138 2015-09-23
PCT/US2016/042839 WO2017052729A1 (en) 2015-09-23 2016-07-18 Electronic neural network circuit having a resistance based learning rule circuit

Publications (2)

Publication Number Publication Date
CN107924485A (en) 2018-04-17
CN107924485B (en) 2021-12-14

Family

ID=58282524

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201680048968.9A Active CN107924485B (en) 2015-09-23 2016-07-18 Electronic neural network circuit with resistance-based learning rule circuit

Country Status (4)

Country Link
US (1) US20170083813A1 (en)
EP (1) EP3353719B1 (en)
CN (1) CN107924485B (en)
WO (1) WO2017052729A1 (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11106966B2 (en) 2017-03-13 2021-08-31 International Business Machines Corporation Battery-based neural network weights
US10331999B2 (en) * 2017-04-03 2019-06-25 Gyrfalcon Technology Inc. Memory subsystem in CNN based digital IC for artificial intelligence
US10534996B2 (en) * 2017-04-03 2020-01-14 Gyrfalcon Technology Inc. Memory subsystem in CNN based digital IC for artificial intelligence
WO2019097513A1 (en) * 2017-11-14 2019-05-23 Technion Research & Development Foundation Limited Analog to digital converter using memristors in a neural network
KR20200026455A (en) * 2018-09-03 2020-03-11 삼성전자주식회사 Artificial neural network system and method of controlling fixed point in artificial neural network
CN110889260B (en) * 2018-09-05 2023-01-17 长鑫存储技术有限公司 Method and device for detecting process parameters, electronic equipment and computer readable medium
US11182686B2 (en) 2019-03-01 2021-11-23 Samsung Electronics Co., Ltd 4T4R ternary weight cell with high on/off ratio background
CN112183734A (en) * 2019-07-03 2021-01-05 财团法人工业技术研究院 Neuron circuit
CN111725386B (en) * 2019-09-23 2022-06-10 中国科学院上海微系统与信息技术研究所 Magnetic memory device and manufacturing method thereof, memory and neural network system
CN111459205B (en) * 2020-04-02 2021-10-12 四川三联新材料有限公司 Heating appliance control system based on reinforcement learning

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103201610A (en) * 2010-10-29 2013-07-10 国际商业机器公司 Neuromorphic and synaptronic spiking neural network with synaptic weights learned using simulation
CN103917992A (en) * 2011-11-09 2014-07-09 高通股份有限公司 Method and apparatus for using memory in probabilistic manner to store synaptic weights of neural network

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US20090259609A1 (en) * 2008-04-15 2009-10-15 Honeywell International Inc. Method and system for providing a linear signal from a magnetoresistive position sensor
US8447714B2 (en) * 2009-05-21 2013-05-21 International Business Machines Corporation System for electronic learning synapse with spike-timing dependent plasticity using phase change memory
US20100312736A1 (en) * 2009-06-05 2010-12-09 The Regents Of The University Of California Critical Branching Neural Computation Apparatus and Methods
JP2014517659A (en) * 2011-06-20 2014-07-17 ザ リージェンツ オブ ザ ユニヴァーシティー オブ カリフォルニア Nerve amplifier
US9208431B2 (en) * 2012-05-10 2015-12-08 Qualcomm Incorporated Method and apparatus for strategic synaptic failure and learning in spiking neural networks
KR102230784B1 (en) * 2013-05-30 2021-03-23 삼성전자주식회사 Synapse circuit for spike-timing dependent plasticity(stdp) operation and neuromorphic system
JP5659361B1 (en) * 2013-07-04 2015-01-28 パナソニックIpマネジメント株式会社 Neural network circuit and learning method thereof
US20150117087A1 (en) * 2013-10-31 2015-04-30 Honeywell International Inc. Self-terminating write for a memory cell


Non-Patent Citations (3)

Title
"Integration of nanoscale memristor synapses in neuromorphic computing architectures"; Giacomo Indiveri et al.; arXiv; 2013-02-27; pp. 1-22, sections 3-4 *
"Tunnel junction based memristors as artificial synapses"; Andy Thomas et al.; Frontiers in Neuroscience; 2015-07-07; pp. 1-9, abstract, section 5 *
"Integration of nanoscale memristor synapses in neuromorphic computing architectures"; Giacomo Indiveri et al.; arXiv; 2013 *

Also Published As

Publication number Publication date
CN107924485A (en) 2018-04-17
US20170083813A1 (en) 2017-03-23
EP3353719A4 (en) 2019-05-08
EP3353719A1 (en) 2018-08-01
WO2017052729A1 (en) 2017-03-30
EP3353719B1 (en) 2020-12-23

Similar Documents

Publication Publication Date Title
CN107924485B (en) Electronic neural network circuit with resistance-based learning rule circuit
US11579677B2 (en) Memristor crossbar arrays to activate processors
Wijesinghe et al. An all-memristor deep spiking neural computing system: A step toward realizing the low-power stochastic brain
Stromatias et al. Scalable energy-efficient, low-latency implementations of trained spiking deep belief networks on spinnaker
JP6477922B2 (en) Memristor neuromorphological circuit and method for training memristor neuromorphological circuit
US9754203B2 Analog multiplier using a memristive device and method for implementing Hebbian learning rules using memristor arrays
Chen et al. Associate learning and correcting in a memristive neural network
Koo et al. SBSNN: Stochastic-bits enabled binary spiking neural network with on-chip learning for energy efficient neuromorphic computing at the edge
US11868874B2 (en) Two-dimensional array-based neuromorphic processor and implementing method
JP2019511799A (en) Analog electronic neural network
US20190197391A1 (en) Homeostatic plasticity control for spiking neural networks
JP2016532216A (en) Method and apparatus for realizing a breakpoint determination unit in an artificial neural system
Baumann et al. Memristor‐enhanced humanoid robot control system–Part II: Circuit theoretic model and performance analysis
US9876979B1 (en) Current generator
Moradi et al. A VLSI network of spiking neurons with an asynchronous static random access memory
US20210201110A1 (en) Methods and systems for performing inference with a neural network
WO2020133492A1 (en) Neural network compression method and apparatus
US20200320385A1 (en) Using quantization in training an artificial intelligence model in a semiconductor solution
Yajima Ultra-low-power switching circuits based on a binary pattern generator with spiking neurons
US20170131358A1 (en) Fuel gauge system for measuring the amount of current in battery and portable electronic device including the same
KR20220066574A (en) Electronic device configured to process image data for training artificial intelligence system
CN109324941A (en) A kind of temperature acquisition method, terminal and storage medium
EP3933703A1 (en) Dynamic loading neural network inference at dram/on-bus sram/serial flash for power optimization
Stromatias Scalability and robustness of artificial neural networks
CN111580783A (en) Weight unit and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant